Test Report: Hyper-V_Windows 18872

e5a45a5ea9a7bb508c00b9c70a33890e15fde7d2:2024-05-14:34460

Failed tests (15/210)

TestAddons/parallel/Registry (64.44s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 18.6007ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-tgmnx" [f9ec5856-bcdf-46bc-ba1e-99b369c17e30] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0091717s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-h8d79" [d53e7630-43a1-40c6-98ce-c03f26363d5d] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0227404s
addons_test.go:340: (dbg) Run:  kubectl --context addons-596400 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-596400 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-596400 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.5183138s)
addons_test.go:359: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-596400 ip
addons_test.go:359: (dbg) Done: out/minikube-windows-amd64.exe -p addons-596400 ip: (2.3968201s)
addons_test.go:364: expected stderr to be -empty- but got: *"W0513 22:28:47.530614    9720 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-596400 ip"
2024/05/13 22:28:49 [DEBUG] GET http://172.23.108.148:5000
addons_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-596400 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p addons-596400 addons disable registry --alsologtostderr -v=1: (15.0964623s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-596400 -n addons-596400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-596400 -n addons-596400: (11.5882708s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-596400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-596400 logs -n 25: (8.0930791s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-977400 | minikube5\jenkins | v1.33.1 | 13 May 24 22:21 UTC |                     |
	|         | -p download-only-977400                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube5\jenkins | v1.33.1 | 13 May 24 22:22 UTC | 13 May 24 22:22 UTC |
	| delete  | -p download-only-977400                                                                     | download-only-977400 | minikube5\jenkins | v1.33.1 | 13 May 24 22:22 UTC | 13 May 24 22:22 UTC |
	| start   | -o=json --download-only                                                                     | download-only-676200 | minikube5\jenkins | v1.33.1 | 13 May 24 22:22 UTC |                     |
	|         | -p download-only-676200                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube5\jenkins | v1.33.1 | 13 May 24 22:22 UTC | 13 May 24 22:22 UTC |
	| delete  | -p download-only-676200                                                                     | download-only-676200 | minikube5\jenkins | v1.33.1 | 13 May 24 22:22 UTC | 13 May 24 22:22 UTC |
	| delete  | -p download-only-977400                                                                     | download-only-977400 | minikube5\jenkins | v1.33.1 | 13 May 24 22:22 UTC | 13 May 24 22:22 UTC |
	| delete  | -p download-only-676200                                                                     | download-only-676200 | minikube5\jenkins | v1.33.1 | 13 May 24 22:22 UTC | 13 May 24 22:22 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-583000 | minikube5\jenkins | v1.33.1 | 13 May 24 22:22 UTC |                     |
	|         | binary-mirror-583000                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:49580                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-583000                                                                     | binary-mirror-583000 | minikube5\jenkins | v1.33.1 | 13 May 24 22:22 UTC | 13 May 24 22:22 UTC |
	| addons  | disable dashboard -p                                                                        | addons-596400        | minikube5\jenkins | v1.33.1 | 13 May 24 22:22 UTC |                     |
	|         | addons-596400                                                                               |                      |                   |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-596400        | minikube5\jenkins | v1.33.1 | 13 May 24 22:22 UTC |                     |
	|         | addons-596400                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-596400 --wait=true                                                                | addons-596400        | minikube5\jenkins | v1.33.1 | 13 May 24 22:22 UTC | 13 May 24 22:28 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --driver=hyperv                                                               |                      |                   |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-596400        | minikube5\jenkins | v1.33.1 | 13 May 24 22:28 UTC | 13 May 24 22:28 UTC |
	|         | addons-596400                                                                               |                      |                   |         |                     |                     |
	| ssh     | addons-596400 ssh cat                                                                       | addons-596400        | minikube5\jenkins | v1.33.1 | 13 May 24 22:28 UTC | 13 May 24 22:28 UTC |
	|         | /opt/local-path-provisioner/pvc-da9206f4-c917-4595-b5c0-874e94c44c3c_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| ip      | addons-596400 ip                                                                            | addons-596400        | minikube5\jenkins | v1.33.1 | 13 May 24 22:28 UTC | 13 May 24 22:28 UTC |
	| addons  | addons-596400 addons disable                                                                | addons-596400        | minikube5\jenkins | v1.33.1 | 13 May 24 22:28 UTC | 13 May 24 22:29 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-596400 addons disable                                                                | addons-596400        | minikube5\jenkins | v1.33.1 | 13 May 24 22:28 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | addons-596400 addons disable                                                                | addons-596400        | minikube5\jenkins | v1.33.1 | 13 May 24 22:29 UTC |                     |
	|         | helm-tiller --alsologtostderr                                                               |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/13 22:22:27
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0513 22:22:27.133982   14020 out.go:291] Setting OutFile to fd 784 ...
	I0513 22:22:27.134989   14020 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:22:27.134989   14020 out.go:304] Setting ErrFile to fd 812...
	I0513 22:22:27.134989   14020 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:22:27.151984   14020 out.go:298] Setting JSON to false
	I0513 22:22:27.153991   14020 start.go:129] hostinfo: {"hostname":"minikube5","uptime":510,"bootTime":1715638436,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4355 Build 19045.4355","kernelVersion":"10.0.19045.4355 Build 19045.4355","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0513 22:22:27.154995   14020 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 22:22:27.158984   14020 out.go:177] * [addons-596400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	I0513 22:22:27.163701   14020 notify.go:220] Checking for updates...
	I0513 22:22:27.165687   14020 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 22:22:27.168445   14020 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 22:22:27.171612   14020 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0513 22:22:27.174057   14020 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 22:22:27.177230   14020 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 22:22:27.180902   14020 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 22:22:32.205635   14020 out.go:177] * Using the hyperv driver based on user configuration
	I0513 22:22:32.209162   14020 start.go:297] selected driver: hyperv
	I0513 22:22:32.209162   14020 start.go:901] validating driver "hyperv" against <nil>
	I0513 22:22:32.209162   14020 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 22:22:32.247765   14020 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 22:22:32.249623   14020 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 22:22:32.249715   14020 cni.go:84] Creating CNI manager for ""
	I0513 22:22:32.249715   14020 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 22:22:32.249809   14020 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0513 22:22:32.250016   14020 start.go:340] cluster config:
	{Name:addons-596400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-596400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 22:22:32.250420   14020 iso.go:125] acquiring lock: {Name:mkcecbdb7e30e5a0901160a859f9d5b65d250c44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 22:22:32.254669   14020 out.go:177] * Starting "addons-596400" primary control-plane node in "addons-596400" cluster
	I0513 22:22:32.257633   14020 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 22:22:32.257838   14020 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0513 22:22:32.257859   14020 cache.go:56] Caching tarball of preloaded images
	I0513 22:22:32.258172   14020 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0513 22:22:32.258293   14020 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 22:22:32.258867   14020 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\config.json ...
	I0513 22:22:32.259064   14020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\config.json: {Name:mk5a42c8ca7336469cfe972cdd2518dfbfc83c09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:22:32.259245   14020 start.go:360] acquireMachinesLock for addons-596400: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 22:22:32.260170   14020 start.go:364] duration metric: took 89.8µs to acquireMachinesLock for "addons-596400"
	I0513 22:22:32.260323   14020 start.go:93] Provisioning new machine with config: &{Name:addons-596400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-596400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 22:22:32.260484   14020 start.go:125] createHost starting for "" (driver="hyperv")
	I0513 22:22:32.264518   14020 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0513 22:22:32.264518   14020 start.go:159] libmachine.API.Create for "addons-596400" (driver="hyperv")
	I0513 22:22:32.264518   14020 client.go:168] LocalClient.Create starting
	I0513 22:22:32.264518   14020 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0513 22:22:32.347453   14020 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0513 22:22:32.700451   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0513 22:22:34.560859   14020 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0513 22:22:34.560859   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:22:34.560859   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0513 22:22:36.064905   14020 main.go:141] libmachine: [stdout =====>] : False
	
	I0513 22:22:36.064905   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:22:36.064905   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0513 22:22:37.351268   14020 main.go:141] libmachine: [stdout =====>] : True
	
	I0513 22:22:37.351268   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:22:37.351268   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0513 22:22:40.640597   14020 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0513 22:22:40.640597   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:22:40.643557   14020 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-amd64.iso...
	I0513 22:22:40.950377   14020 main.go:141] libmachine: Creating SSH key...
	I0513 22:22:41.188847   14020 main.go:141] libmachine: Creating VM...
	I0513 22:22:41.188847   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0513 22:22:43.628192   14020 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0513 22:22:43.628453   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:22:43.628453   14020 main.go:141] libmachine: Using switch "Default Switch"
	I0513 22:22:43.628453   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0513 22:22:45.110786   14020 main.go:141] libmachine: [stdout =====>] : True
	
	I0513 22:22:45.110786   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:22:45.110786   14020 main.go:141] libmachine: Creating VHD
	I0513 22:22:45.110786   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\fixed.vhd' -SizeBytes 10MB -Fixed
	I0513 22:22:48.521434   14020 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D981BB91-58C0-4601-BCD7-39ADED4A8D9D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0513 22:22:48.521434   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:22:48.521434   14020 main.go:141] libmachine: Writing magic tar header
	I0513 22:22:48.521645   14020 main.go:141] libmachine: Writing SSH key tar header
	I0513 22:22:48.529683   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\disk.vhd' -VHDType Dynamic -DeleteSource
	I0513 22:22:51.443440   14020 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:22:51.443440   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:22:51.444300   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\disk.vhd' -SizeBytes 20000MB
	I0513 22:22:53.716594   14020 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:22:53.716594   14020 main.go:141] libmachine: [stderr =====>] : 
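The disk preparation traced above runs three Hyper-V cmdlets in sequence: create a small fixed VHD (into which the "magic tar header" and SSH key are written as raw bytes), convert it to a dynamic VHD, then resize it to the requested disk size. A minimal sketch of that command sequence as a builder function — the function name and paths are hypothetical, not minikube's actual code:

```python
# Sketch of the Hyper-V disk preparation seen in the log:
# New-VHD (fixed, 10MB) -> Convert-VHD (dynamic, delete source) -> Resize-VHD.
# Each returned string would be handed to powershell.exe -NoProfile -NonInteractive.
POWERSHELL = r"C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe"

def hyperv_disk_commands(machine_dir: str, size_mb: int = 20000) -> list:
    fixed = machine_dir + r"\fixed.vhd"
    disk = machine_dir + r"\disk.vhd"
    return [
        "Hyper-V\\New-VHD -Path '%s' -SizeBytes 10MB -Fixed" % fixed,
        "Hyper-V\\Convert-VHD -Path '%s' -DestinationPath '%s' "
        "-VHDType Dynamic -DeleteSource" % (fixed, disk),
        "Hyper-V\\Resize-VHD -Path '%s' -SizeBytes %dMB" % (disk, size_mb),
    ]
```

The fixed-then-convert dance keeps the initial file tiny (10 MB) so the key material can be written at known offsets before the image grows to its final dynamic size.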
	I0513 22:22:53.717647   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-596400 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0513 22:22:56.929218   14020 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-596400 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0513 22:22:56.929421   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:22:56.929523   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-596400 -DynamicMemoryEnabled $false
	I0513 22:22:58.876141   14020 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:22:58.876141   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:22:58.876690   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-596400 -Count 2
	I0513 22:23:00.792106   14020 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:23:00.793024   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:23:00.793183   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-596400 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\boot2docker.iso'
	I0513 22:23:03.041952   14020 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:23:03.041952   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:23:03.041952   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-596400 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\disk.vhd'
	I0513 22:23:05.333802   14020 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:23:05.333887   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:23:05.333887   14020 main.go:141] libmachine: Starting VM...
	I0513 22:23:05.334021   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-596400
	I0513 22:23:08.118600   14020 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:23:08.118951   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:23:08.118951   14020 main.go:141] libmachine: Waiting for host to start...
	I0513 22:23:08.118951   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:23:10.125472   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:23:10.125472   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:23:10.125709   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:23:12.341758   14020 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:23:12.341758   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:23:13.348719   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:23:15.258606   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:23:15.258606   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:23:15.259711   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:23:17.463928   14020 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:23:17.463928   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:23:18.465709   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:23:20.409057   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:23:20.409057   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:23:20.409537   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:23:22.626810   14020 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:23:22.627554   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:23:23.637228   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:23:25.578351   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:23:25.578351   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:23:25.578351   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:23:27.791086   14020 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:23:27.791133   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:23:28.802516   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:23:30.736792   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:23:30.736792   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:23:30.736792   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:23:33.015767   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:23:33.015767   14020 main.go:141] libmachine: [stderr =====>] : 
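The repeated `( Hyper-V\Get-VM … ).state` / `ipaddresses[0]` queries above form a simple poll-until-IP loop: check the VM is Running, ask the first adapter for an address, sleep, retry until the address appears (172.23.108.148 here). A minimal sketch of that wait logic, with the two query callables as stand-ins for the PowerShell invocations:

```python
import time

def wait_for_ip(get_state, get_ip, timeout_s=120, interval_s=1.0):
    """Poll VM state and first-adapter IP until an address appears.

    get_state/get_ip are stand-ins for the PowerShell queries in the log:
      ( Hyper-V\\Get-VM <name> ).state
      (( Hyper-V\\Get-VM <name> ).networkadapters[0]).ipaddresses[0]
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_state() == "Running":
            ip = get_ip()
            if ip:  # empty stdout means DHCP hasn't assigned one yet
                return ip
        time.sleep(interval_s)
    raise TimeoutError("VM never reported an IP address")
```

In the trace each state+IP round trip costs roughly five seconds (two PowerShell spawns plus a one-second sleep), which is why the VM takes several iterations to report an address.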
	I0513 22:23:33.015767   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:23:34.820220   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:23:34.820332   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:23:34.820332   14020 machine.go:94] provisionDockerMachine start ...
	I0513 22:23:34.820514   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:23:36.707924   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:23:36.707924   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:23:36.708008   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:23:38.908555   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:23:38.908582   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:23:38.911853   14020 main.go:141] libmachine: Using SSH client type: native
	I0513 22:23:38.912463   14020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.148 22 <nil> <nil>}
	I0513 22:23:38.912463   14020 main.go:141] libmachine: About to run SSH command:
	hostname
	I0513 22:23:39.048034   14020 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0513 22:23:39.048129   14020 buildroot.go:166] provisioning hostname "addons-596400"
	I0513 22:23:39.048129   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:23:40.894678   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:23:40.894678   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:23:40.894678   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:23:43.121289   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:23:43.121289   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:23:43.125551   14020 main.go:141] libmachine: Using SSH client type: native
	I0513 22:23:43.125551   14020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.148 22 <nil> <nil>}
	I0513 22:23:43.125551   14020 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-596400 && echo "addons-596400" | sudo tee /etc/hostname
	I0513 22:23:43.279897   14020 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-596400
	
	I0513 22:23:43.280130   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:23:45.160692   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:23:45.160692   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:23:45.160764   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:23:47.390122   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:23:47.390122   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:23:47.393746   14020 main.go:141] libmachine: Using SSH client type: native
	I0513 22:23:47.393943   14020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.148 22 <nil> <nil>}
	I0513 22:23:47.393943   14020 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-596400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-596400/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-596400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0513 22:23:47.540687   14020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0513 22:23:47.540687   14020 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0513 22:23:47.540687   14020 buildroot.go:174] setting up certificates
	I0513 22:23:47.540687   14020 provision.go:84] configureAuth start
	I0513 22:23:47.540687   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:23:49.409645   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:23:49.410464   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:23:49.410464   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:23:51.621264   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:23:51.621264   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:23:51.621627   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:23:53.497464   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:23:53.497464   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:23:53.497576   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:23:55.731732   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:23:55.731773   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:23:55.731773   14020 provision.go:143] copyHostCerts
	I0513 22:23:55.731892   14020 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0513 22:23:55.733035   14020 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0513 22:23:55.733997   14020 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0513 22:23:55.734220   14020 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-596400 san=[127.0.0.1 172.23.108.148 addons-596400 localhost minikube]
	I0513 22:23:56.018884   14020 provision.go:177] copyRemoteCerts
	I0513 22:23:56.025875   14020 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0513 22:23:56.025875   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:23:57.851118   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:23:57.851118   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:23:57.852387   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:24:00.086811   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:24:00.087690   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:24:00.088144   14020 sshutil.go:53] new ssh client: &{IP:172.23.108.148 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\id_rsa Username:docker}
	I0513 22:24:00.193898   14020 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1678928s)
	I0513 22:24:00.194477   14020 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0513 22:24:00.234793   14020 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0513 22:24:00.272574   14020 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0513 22:24:00.311704   14020 provision.go:87] duration metric: took 12.7708388s to configureAuth
	I0513 22:24:00.311816   14020 buildroot.go:189] setting minikube options for container-runtime
	I0513 22:24:00.312360   14020 config.go:182] Loaded profile config "addons-596400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 22:24:00.312360   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:24:02.209137   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:24:02.209186   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:24:02.209186   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:24:04.427068   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:24:04.427068   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:24:04.430912   14020 main.go:141] libmachine: Using SSH client type: native
	I0513 22:24:04.431036   14020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.148 22 <nil> <nil>}
	I0513 22:24:04.431036   14020 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0513 22:24:04.573223   14020 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0513 22:24:04.573223   14020 buildroot.go:70] root file system type: tmpfs
	I0513 22:24:04.573223   14020 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0513 22:24:04.573223   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:24:06.478227   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:24:06.478304   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:24:06.478304   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:24:08.708329   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:24:08.708533   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:24:08.713624   14020 main.go:141] libmachine: Using SSH client type: native
	I0513 22:24:08.713624   14020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.148 22 <nil> <nil>}
	I0513 22:24:08.714274   14020 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0513 22:24:08.871318   14020 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0513 22:24:08.871318   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:24:10.793611   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:24:10.793611   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:24:10.794062   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:24:13.021801   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:24:13.021801   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:24:13.028199   14020 main.go:141] libmachine: Using SSH client type: native
	I0513 22:24:13.028199   14020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.148 22 <nil> <nil>}
	I0513 22:24:13.028199   14020 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0513 22:24:15.063500   14020 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0513 22:24:15.063500   14020 machine.go:97] duration metric: took 40.242605s to provisionDockerMachine
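The `diff … || { mv …; systemctl … }` command above is an idempotent-update idiom: the new unit is written to `docker.service.new`, and the live unit is replaced (and the daemon reloaded) only when the content actually differs — on first boot `diff` fails with "can't stat", as the log shows, so the install branch runs. A small model of that idiom in plain file operations; the function name is illustrative:

```python
import os

def install_if_changed(new_path, live_path):
    """Replace live_path with new_path only when contents differ.

    Mirrors the shell idiom in the log:
        diff -u live new || { mv new live; daemon-reload; enable; restart; }
    Returns True when the file was (re)installed, i.e. a reload is needed.
    """
    with open(new_path) as f:
        new_content = f.read()
    try:
        with open(live_path) as f:
            live_content = f.read()
    except FileNotFoundError:
        live_content = None  # first install: diff "can't stat", as in the log
    if new_content == live_content:
        os.remove(new_path)  # nothing to do; discard the staged copy
        return False
    os.replace(new_path, live_path)
    return True
```

Gating the restart on an actual content change keeps repeated provisioning runs from needlessly bouncing the Docker daemon.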
	I0513 22:24:15.063662   14020 client.go:171] duration metric: took 1m42.7977405s to LocalClient.Create
	I0513 22:24:15.063795   14020 start.go:167] duration metric: took 1m42.7978738s to libmachine.API.Create "addons-596400"
	I0513 22:24:15.063958   14020 start.go:293] postStartSetup for "addons-596400" (driver="hyperv")
	I0513 22:24:15.063958   14020 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0513 22:24:15.075569   14020 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0513 22:24:15.075569   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:24:16.920138   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:24:16.920138   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:24:16.920479   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:24:19.130286   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:24:19.130286   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:24:19.130928   14020 sshutil.go:53] new ssh client: &{IP:172.23.108.148 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\id_rsa Username:docker}
	I0513 22:24:19.244469   14020 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.1688034s)
	I0513 22:24:19.253410   14020 ssh_runner.go:195] Run: cat /etc/os-release
	I0513 22:24:19.261578   14020 info.go:137] Remote host: Buildroot 2023.02.9
	I0513 22:24:19.261578   14020 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0513 22:24:19.261578   14020 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0513 22:24:19.262153   14020 start.go:296] duration metric: took 4.1981349s for postStartSetup
	I0513 22:24:19.263881   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:24:21.117471   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:24:21.117778   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:24:21.117778   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:24:23.325182   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:24:23.325182   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:24:23.325552   14020 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\config.json ...
	I0513 22:24:23.327550   14020 start.go:128] duration metric: took 1m51.0655467s to createHost
	I0513 22:24:23.327692   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:24:25.220105   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:24:25.220105   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:24:25.220486   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:24:27.483971   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:24:27.483971   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:24:27.488261   14020 main.go:141] libmachine: Using SSH client type: native
	I0513 22:24:27.488789   14020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.148 22 <nil> <nil>}
	I0513 22:24:27.488867   14020 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0513 22:24:27.625610   14020 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715639067.712378414
	
	I0513 22:24:27.625711   14020 fix.go:216] guest clock: 1715639067.712378414
	I0513 22:24:27.625711   14020 fix.go:229] Guest: 2024-05-13 22:24:27.712378414 +0000 UTC Remote: 2024-05-13 22:24:23.3276213 +0000 UTC m=+116.307671801 (delta=4.384757114s)
	I0513 22:24:27.625975   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:24:29.481363   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:24:29.481748   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:24:29.481823   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:24:31.778525   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:24:31.778525   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:24:31.782582   14020 main.go:141] libmachine: Using SSH client type: native
	I0513 22:24:31.783014   14020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.148 22 <nil> <nil>}
	I0513 22:24:31.783014   14020 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715639067
	I0513 22:24:31.930966   14020 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 13 22:24:27 UTC 2024
	
	I0513 22:24:31.930966   14020 fix.go:236] clock set: Mon May 13 22:24:27 UTC 2024
	 (err=<nil>)
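The clock fix above reads the guest's `date +%s.%N`, compares it with the host's wall clock, and resets the guest via `sudo date -s @<epoch>` — the log records a delta of about 4.38 s (Guest 22:24:27.71 vs Remote 22:24:23.33). A sketch of the delta check; the 1-second tolerance is an assumption, minikube's actual threshold may differ:

```python
def clock_delta(guest_epoch, host_epoch):
    """Seconds the guest clock is ahead of the host (negative = behind)."""
    return guest_epoch - host_epoch

def needs_clock_fix(guest_epoch, host_epoch, tolerance_s=1.0):
    # The log shows a ~4.38s delta triggering `sudo date -s @1715639067`.
    # tolerance_s is an assumed threshold for illustration only.
    return abs(clock_delta(guest_epoch, host_epoch)) > tolerance_s
```

Keeping the guest clock close to the host matters later in the run: TLS certificate validation and kubeadm both misbehave when the skew grows too large.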
	I0513 22:24:31.931034   14020 start.go:83] releasing machines lock for "addons-596400", held for 1m59.6691538s
	I0513 22:24:31.931073   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:24:33.812361   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:24:33.812361   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:24:33.812436   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:24:36.104762   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:24:36.104762   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:24:36.108318   14020 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0513 22:24:36.108318   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:24:36.115660   14020 ssh_runner.go:195] Run: cat /version.json
	I0513 22:24:36.115697   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:24:38.051970   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:24:38.052815   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:24:38.052815   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:24:38.086244   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:24:38.086983   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:24:38.086983   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:24:40.456255   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:24:40.456255   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:24:40.456773   14020 sshutil.go:53] new ssh client: &{IP:172.23.108.148 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\id_rsa Username:docker}
	I0513 22:24:40.483478   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:24:40.483478   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:24:40.484095   14020 sshutil.go:53] new ssh client: &{IP:172.23.108.148 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\id_rsa Username:docker}
	I0513 22:24:40.550688   14020 ssh_runner.go:235] Completed: cat /version.json: (4.4349278s)
	I0513 22:24:40.558927   14020 ssh_runner.go:195] Run: systemctl --version
	I0513 22:24:40.786337   14020 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6778615s)
	I0513 22:24:40.797488   14020 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0513 22:24:40.806115   14020 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0513 22:24:40.814568   14020 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0513 22:24:40.841532   14020 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0513 22:24:40.841532   14020 start.go:494] detecting cgroup driver to use...
	I0513 22:24:40.841532   14020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 22:24:40.883165   14020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0513 22:24:40.910200   14020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0513 22:24:40.929034   14020 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0513 22:24:40.937710   14020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0513 22:24:40.964478   14020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 22:24:40.991666   14020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0513 22:24:41.029024   14020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 22:24:41.062689   14020 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0513 22:24:41.092427   14020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0513 22:24:41.122208   14020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0513 22:24:41.149037   14020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0513 22:24:41.179297   14020 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0513 22:24:41.204722   14020 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0513 22:24:41.232342   14020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:24:41.425429   14020 ssh_runner.go:195] Run: sudo systemctl restart containerd
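The run of `sed -i` commands above rewrites /etc/containerd/config.toml to pin the pause image and force the cgroupfs driver. The same edits against a throwaway copy of a (made-up, minimal) config.toml, so they can be seen without sudo; GNU sed is assumed for `sed -i`:

```shell
# Throwaway containerd config so the log's sed edits run without root.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# pin the sandbox (pause) image, as the log does
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
# switch containerd to the cgroupfs driver
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep -E 'sandbox_image|SystemdCgroup' "$cfg"
```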
	I0513 22:24:41.455815   14020 start.go:494] detecting cgroup driver to use...
	I0513 22:24:41.465891   14020 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0513 22:24:41.498625   14020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 22:24:41.528576   14020 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0513 22:24:41.572492   14020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 22:24:41.606105   14020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 22:24:41.637856   14020 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0513 22:24:41.698399   14020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 22:24:41.723313   14020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 22:24:41.765475   14020 ssh_runner.go:195] Run: which cri-dockerd
	I0513 22:24:41.779283   14020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0513 22:24:41.793703   14020 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0513 22:24:41.831689   14020 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0513 22:24:42.016988   14020 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0513 22:24:42.195497   14020 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0513 22:24:42.195760   14020 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0513 22:24:42.239075   14020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:24:42.419726   14020 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0513 22:24:44.898082   14020 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4782027s)
	I0513 22:24:44.908440   14020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0513 22:24:44.938693   14020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 22:24:44.968012   14020 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0513 22:24:45.140170   14020 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0513 22:24:45.315991   14020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:24:45.498915   14020 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0513 22:24:45.533268   14020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 22:24:45.565869   14020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:24:45.741980   14020 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0513 22:24:45.842169   14020 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0513 22:24:45.851227   14020 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0513 22:24:45.861862   14020 start.go:562] Will wait 60s for crictl version
	I0513 22:24:45.870492   14020 ssh_runner.go:195] Run: which crictl
	I0513 22:24:45.886249   14020 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0513 22:24:45.940629   14020 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
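The runtime endpoint that `crictl version` reports above comes from the /etc/crictl.yaml the log wrote earlier via `printf ... | sudo tee`. The same one-line config, written to a temp path so no root is needed:

```shell
# crictl resolves its runtime from this file; temp dir stands in for /etc.
etc=$(mktemp -d)
printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' > "$etc/crictl.yaml"
cat "$etc/crictl.yaml"
```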
	I0513 22:24:45.947557   14020 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 22:24:45.983895   14020 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 22:24:46.015850   14020 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0513 22:24:46.015850   14020 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0513 22:24:46.018848   14020 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0513 22:24:46.018848   14020 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0513 22:24:46.018848   14020 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0513 22:24:46.018848   14020 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:95:ed Flags:up|broadcast|multicast|running}
	I0513 22:24:46.020848   14020 ip.go:210] interface addr: fe80::3ceb:68d:afab:af25/64
	I0513 22:24:46.020848   14020 ip.go:210] interface addr: 172.23.96.1/20
	I0513 22:24:46.028846   14020 ssh_runner.go:195] Run: grep 172.23.96.1	host.minikube.internal$ /etc/hosts
	I0513 22:24:46.035550   14020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
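The /etc/hosts update above is a grep-out-then-append, so repeated runs stay idempotent: any stale `host.minikube.internal` line is dropped before the fresh one is written. Same pipeline against a temp file (addresses below are illustrative):

```shell
# Idempotent hosts-file update, as in the log, minus sudo.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.23.96.9\thost.minikube.internal\n' > "$hosts"
tmp=$(mktemp)
{ grep -v 'host.minikube.internal$' "$hosts"; \
  printf '172.23.96.1\thost.minikube.internal\n'; } > "$tmp"
cp "$tmp" "$hosts"
cat "$hosts"    # stale 172.23.96.9 entry replaced by 172.23.96.1
```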
	I0513 22:24:46.055892   14020 kubeadm.go:877] updating cluster {Name:addons-596400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-596400 Namespace:default APISer
verHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.108.148 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0513 22:24:46.056488   14020 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 22:24:46.065879   14020 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0513 22:24:46.085923   14020 docker.go:685] Got preloaded images: 
	I0513 22:24:46.085923   14020 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0513 22:24:46.093479   14020 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0513 22:24:46.124405   14020 ssh_runner.go:195] Run: which lz4
	I0513 22:24:46.153746   14020 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0513 22:24:46.160812   14020 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0513 22:24:46.160812   14020 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0513 22:24:47.291043   14020 docker.go:649] duration metric: took 1.1482785s to copy over tarball
	I0513 22:24:47.301019   14020 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0513 22:24:52.337580   14020 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.0364213s)
	I0513 22:24:52.392852   14020 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0513 22:24:52.453887   14020 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0513 22:24:52.472768   14020 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0513 22:24:52.512812   14020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:24:52.699383   14020 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0513 22:24:58.300096   14020 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.6006322s)
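The preload path above copies an lz4-compressed image tarball into the VM, unpacks it under /var with xattrs preserved, then deletes the tarball. A sketch of that unpack, with gzip standing in for lz4 so it runs without the `lz4` binary, and made-up file names:

```shell
# Preload-style pack/unpack; gzip replaces the log's `-I lz4` for portability.
src=$(mktemp -d); var=$(mktemp -d)
mkdir -p "$src/lib/docker/overlay2"
echo layer-data > "$src/lib/docker/overlay2/demo"
tar -C "$src" -czf "$var/preloaded.tar.gz" lib
# extract under the target root, keeping security xattrs as the log does
tar --xattrs --xattrs-include security.capability -C "$var" -xzf "$var/preloaded.tar.gz"
rm -f "$var/preloaded.tar.gz"        # the log removes the tarball afterwards
ls "$var/lib/docker/overlay2"
```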
	I0513 22:24:58.310979   14020 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0513 22:24:58.332587   14020 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0513 22:24:58.332587   14020 cache_images.go:84] Images are preloaded, skipping loading
	I0513 22:24:58.332587   14020 kubeadm.go:928] updating node { 172.23.108.148 8443 v1.30.0 docker true true} ...
	I0513 22:24:58.333655   14020 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-596400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.108.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-596400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0513 22:24:58.341889   14020 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0513 22:24:58.372692   14020 cni.go:84] Creating CNI manager for ""
	I0513 22:24:58.372692   14020 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 22:24:58.372692   14020 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0513 22:24:58.372692   14020 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.108.148 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-596400 NodeName:addons-596400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.108.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.108.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0513 22:24:58.372692   14020 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.108.148
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-596400"
	  kubeletExtraArgs:
	    node-ip: 172.23.108.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.108.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0513 22:24:58.385091   14020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0513 22:24:58.401458   14020 binaries.go:44] Found k8s binaries, skipping transfer
	I0513 22:24:58.410144   14020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0513 22:24:58.424763   14020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0513 22:24:58.453576   14020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0513 22:24:58.484786   14020 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0513 22:24:58.524041   14020 ssh_runner.go:195] Run: grep 172.23.108.148	control-plane.minikube.internal$ /etc/hosts
	I0513 22:24:58.528366   14020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.108.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0513 22:24:58.560373   14020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:24:58.729186   14020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 22:24:58.753578   14020 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400 for IP: 172.23.108.148
	I0513 22:24:58.754003   14020 certs.go:194] generating shared ca certs ...
	I0513 22:24:58.754044   14020 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:24:58.754386   14020 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0513 22:24:58.916198   14020 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt ...
	I0513 22:24:58.917198   14020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt: {Name:mkecc83abf7dbcd2f2b0fd63bac36f2a7fe554cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:24:58.918510   14020 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key ...
	I0513 22:24:58.918510   14020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key: {Name:mk56e2872d5c5070a04729e59e76e7398d15f15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:24:58.919058   14020 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0513 22:24:59.166524   14020 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0513 22:24:59.166524   14020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mkfcb9723e08b8d76b8a2e73084c13f930548396 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:24:59.167540   14020 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key ...
	I0513 22:24:59.167540   14020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key: {Name:mkd23bfd48ce10457a367dee40c81533c5cc7b5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:24:59.169676   14020 certs.go:256] generating profile certs ...
	I0513 22:24:59.170041   14020 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.key
	I0513 22:24:59.170041   14020 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt with IP's: []
	I0513 22:24:59.225740   14020 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt ...
	I0513 22:24:59.225740   14020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: {Name:mk928054881e484ee1c92e960c00eb2934c9574d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:24:59.226924   14020 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.key ...
	I0513 22:24:59.226924   14020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.key: {Name:mk4cfea1b4f5c50ced2922d66575312ce89d04f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:24:59.227962   14020 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\apiserver.key.9af7f606
	I0513 22:24:59.227962   14020 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\apiserver.crt.9af7f606 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.108.148]
	I0513 22:24:59.414690   14020 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\apiserver.crt.9af7f606 ...
	I0513 22:24:59.414690   14020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\apiserver.crt.9af7f606: {Name:mk179ecd12dd1d6910d1c4d02c40b0e9853d3481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:24:59.415530   14020 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\apiserver.key.9af7f606 ...
	I0513 22:24:59.415530   14020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\apiserver.key.9af7f606: {Name:mkd2ee23309ac1adc5c77a13f16fdfac2208c769 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:24:59.416504   14020 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\apiserver.crt.9af7f606 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\apiserver.crt
	I0513 22:24:59.430607   14020 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\apiserver.key.9af7f606 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\apiserver.key
	I0513 22:24:59.431609   14020 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\proxy-client.key
	I0513 22:24:59.431724   14020 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\proxy-client.crt with IP's: []
	I0513 22:24:59.660283   14020 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\proxy-client.crt ...
	I0513 22:24:59.660283   14020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\proxy-client.crt: {Name:mk4ce2647622a463f3fc6485bb4165d25e3570e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:24:59.661061   14020 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\proxy-client.key ...
	I0513 22:24:59.661061   14020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\proxy-client.key: {Name:mk19aac1068acaba75fdafa281220bbcb1f762db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:24:59.674276   14020 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0513 22:24:59.680990   14020 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0513 22:24:59.686850   14020 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0513 22:24:59.692444   14020 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0513 22:24:59.698441   14020 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0513 22:24:59.747917   14020 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0513 22:24:59.787589   14020 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0513 22:24:59.827143   14020 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0513 22:24:59.867566   14020 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0513 22:24:59.905848   14020 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0513 22:24:59.947713   14020 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0513 22:24:59.987688   14020 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0513 22:25:00.031231   14020 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0513 22:25:00.071407   14020 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0513 22:25:00.110148   14020 ssh_runner.go:195] Run: openssl version
	I0513 22:25:00.126921   14020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0513 22:25:00.154156   14020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0513 22:25:00.161254   14020 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0513 22:25:00.170651   14020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0513 22:25:00.186039   14020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
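The symlink name /etc/ssl/certs/b5213941.0 above is OpenSSL's subject-hash lookup name for minikubeCA.pem: `openssl x509 -hash` prints the hash, and the `.0` link lets the library find the CA by name. Demonstrated with a throwaway self-signed CA (the `openssl` CLI is assumed on PATH):

```shell
# Reproduce the hash-and-symlink pattern with a disposable CA.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/${hash}.0"   # same pattern as the log's symlink
echo "$hash"
```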
	I0513 22:25:00.216488   14020 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0513 22:25:00.223908   14020 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0513 22:25:00.223951   14020 kubeadm.go:391] StartCluster: {Name:addons-596400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-596400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.108.148 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 22:25:00.231107   14020 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0513 22:25:00.261017   14020 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0513 22:25:00.289441   14020 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0513 22:25:00.313982   14020 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0513 22:25:00.330420   14020 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0513 22:25:00.330462   14020 kubeadm.go:156] found existing configuration files:
	
	I0513 22:25:00.338755   14020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0513 22:25:00.354218   14020 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0513 22:25:00.362684   14020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0513 22:25:00.388838   14020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0513 22:25:00.405083   14020 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0513 22:25:00.414160   14020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0513 22:25:00.437496   14020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0513 22:25:00.454505   14020 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0513 22:25:00.462877   14020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0513 22:25:00.485428   14020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0513 22:25:00.500619   14020 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0513 22:25:00.512896   14020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0513 22:25:00.528487   14020 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0513 22:25:00.725406   14020 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0513 22:25:13.915950   14020 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0513 22:25:13.915950   14020 kubeadm.go:309] [preflight] Running pre-flight checks
	I0513 22:25:13.916544   14020 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0513 22:25:13.916544   14020 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0513 22:25:13.917070   14020 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0513 22:25:13.917359   14020 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0513 22:25:13.919954   14020 out.go:204]   - Generating certificates and keys ...
	I0513 22:25:13.919954   14020 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0513 22:25:13.920538   14020 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0513 22:25:13.920538   14020 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0513 22:25:13.920538   14020 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0513 22:25:13.921063   14020 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0513 22:25:13.921127   14020 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0513 22:25:13.921127   14020 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0513 22:25:13.921127   14020 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-596400 localhost] and IPs [172.23.108.148 127.0.0.1 ::1]
	I0513 22:25:13.921127   14020 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0513 22:25:13.921909   14020 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-596400 localhost] and IPs [172.23.108.148 127.0.0.1 ::1]
	I0513 22:25:13.922252   14020 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0513 22:25:13.922252   14020 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0513 22:25:13.922252   14020 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0513 22:25:13.922252   14020 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0513 22:25:13.922780   14020 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0513 22:25:13.922993   14020 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0513 22:25:13.922993   14020 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0513 22:25:13.922993   14020 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0513 22:25:13.922993   14020 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0513 22:25:13.923765   14020 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0513 22:25:13.923927   14020 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0513 22:25:13.926668   14020 out.go:204]   - Booting up control plane ...
	I0513 22:25:13.927211   14020 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0513 22:25:13.927283   14020 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0513 22:25:13.927283   14020 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0513 22:25:13.927283   14020 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0513 22:25:13.927971   14020 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0513 22:25:13.927971   14020 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0513 22:25:13.927971   14020 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0513 22:25:13.928558   14020 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0513 22:25:13.928558   14020 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.00206497s
	I0513 22:25:13.928558   14020 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0513 22:25:13.929090   14020 kubeadm.go:309] [api-check] The API server is healthy after 6.502303535s
	I0513 22:25:13.929151   14020 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0513 22:25:13.929686   14020 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0513 22:25:13.929729   14020 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0513 22:25:13.929729   14020 kubeadm.go:309] [mark-control-plane] Marking the node addons-596400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0513 22:25:13.930260   14020 kubeadm.go:309] [bootstrap-token] Using token: naijkz.yo5fjwzpqz17j8dr
	I0513 22:25:13.932778   14020 out.go:204]   - Configuring RBAC rules ...
	I0513 22:25:13.933318   14020 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0513 22:25:13.933400   14020 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0513 22:25:13.933400   14020 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0513 22:25:13.933990   14020 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0513 22:25:13.933990   14020 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0513 22:25:13.933990   14020 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0513 22:25:13.934723   14020 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0513 22:25:13.934723   14020 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0513 22:25:13.934723   14020 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0513 22:25:13.934723   14020 kubeadm.go:309] 
	I0513 22:25:13.934723   14020 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0513 22:25:13.934723   14020 kubeadm.go:309] 
	I0513 22:25:13.935252   14020 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0513 22:25:13.935320   14020 kubeadm.go:309] 
	I0513 22:25:13.935320   14020 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0513 22:25:13.935320   14020 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0513 22:25:13.935320   14020 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0513 22:25:13.935320   14020 kubeadm.go:309] 
	I0513 22:25:13.935320   14020 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0513 22:25:13.935320   14020 kubeadm.go:309] 
	I0513 22:25:13.935320   14020 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0513 22:25:13.935320   14020 kubeadm.go:309] 
	I0513 22:25:13.935916   14020 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0513 22:25:13.935916   14020 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0513 22:25:13.935916   14020 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0513 22:25:13.935916   14020 kubeadm.go:309] 
	I0513 22:25:13.935916   14020 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0513 22:25:13.936517   14020 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0513 22:25:13.936517   14020 kubeadm.go:309] 
	I0513 22:25:13.936517   14020 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token naijkz.yo5fjwzpqz17j8dr \
	I0513 22:25:13.937041   14020 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 \
	I0513 22:25:13.937074   14020 kubeadm.go:309] 	--control-plane 
	I0513 22:25:13.937119   14020 kubeadm.go:309] 
	I0513 22:25:13.937119   14020 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0513 22:25:13.937119   14020 kubeadm.go:309] 
	I0513 22:25:13.937119   14020 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token naijkz.yo5fjwzpqz17j8dr \
	I0513 22:25:13.937119   14020 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 
	I0513 22:25:13.937648   14020 cni.go:84] Creating CNI manager for ""
	I0513 22:25:13.937709   14020 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 22:25:13.939844   14020 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0513 22:25:13.951663   14020 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0513 22:25:13.968603   14020 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0513 22:25:14.000633   14020 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0513 22:25:14.013541   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:14.016717   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-596400 minikube.k8s.io/updated_at=2024_05_13T22_25_14_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761 minikube.k8s.io/name=addons-596400 minikube.k8s.io/primary=true
	I0513 22:25:14.026867   14020 ops.go:34] apiserver oom_adj: -16
	I0513 22:25:14.164705   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:14.672451   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:15.177031   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:15.667466   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:16.181039   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:16.670620   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:17.177891   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:17.670359   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:18.183764   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:18.676061   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:19.179968   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:19.679293   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:20.181108   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:20.675192   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:21.175193   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:21.683277   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:22.180882   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:22.680533   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:23.167482   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:23.675862   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:24.172208   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:24.681296   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:25.177581   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:25.667933   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:26.181810   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:26.671374   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:27.179221   14020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:25:27.300153   14020 kubeadm.go:1107] duration metric: took 13.2993268s to wait for elevateKubeSystemPrivileges
	W0513 22:25:27.300153   14020 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0513 22:25:27.300153   14020 kubeadm.go:393] duration metric: took 27.0758092s to StartCluster
	I0513 22:25:27.300153   14020 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:25:27.300679   14020 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 22:25:27.301768   14020 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:25:27.303240   14020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0513 22:25:27.303240   14020 start.go:234] Will wait 6m0s for node &{Name: IP:172.23.108.148 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 22:25:27.303240   14020 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0513 22:25:27.307593   14020 out.go:177] * Verifying Kubernetes components...
	I0513 22:25:27.303240   14020 addons.go:69] Setting yakd=true in profile "addons-596400"
	I0513 22:25:27.303774   14020 addons.go:69] Setting ingress-dns=true in profile "addons-596400"
	I0513 22:25:27.303774   14020 addons.go:69] Setting ingress=true in profile "addons-596400"
	I0513 22:25:27.303774   14020 addons.go:69] Setting inspektor-gadget=true in profile "addons-596400"
	I0513 22:25:27.303774   14020 addons.go:69] Setting cloud-spanner=true in profile "addons-596400"
	I0513 22:25:27.303774   14020 addons.go:69] Setting metrics-server=true in profile "addons-596400"
	I0513 22:25:27.303774   14020 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-596400"
	I0513 22:25:27.303774   14020 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-596400"
	I0513 22:25:27.303774   14020 addons.go:69] Setting default-storageclass=true in profile "addons-596400"
	I0513 22:25:27.303774   14020 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-596400"
	I0513 22:25:27.303774   14020 addons.go:69] Setting gcp-auth=true in profile "addons-596400"
	I0513 22:25:27.303774   14020 addons.go:69] Setting volumesnapshots=true in profile "addons-596400"
	I0513 22:25:27.303774   14020 addons.go:69] Setting helm-tiller=true in profile "addons-596400"
	I0513 22:25:27.303774   14020 addons.go:69] Setting registry=true in profile "addons-596400"
	I0513 22:25:27.303774   14020 addons.go:69] Setting storage-provisioner=true in profile "addons-596400"
	I0513 22:25:27.307593   14020 addons.go:234] Setting addon yakd=true in "addons-596400"
	I0513 22:25:27.309804   14020 addons.go:234] Setting addon inspektor-gadget=true in "addons-596400"
	I0513 22:25:27.309804   14020 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-596400"
	I0513 22:25:27.309804   14020 addons.go:234] Setting addon volumesnapshots=true in "addons-596400"
	I0513 22:25:27.309804   14020 addons.go:234] Setting addon metrics-server=true in "addons-596400"
	I0513 22:25:27.309804   14020 host.go:66] Checking if "addons-596400" exists ...
	I0513 22:25:27.309804   14020 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-596400"
	I0513 22:25:27.309804   14020 host.go:66] Checking if "addons-596400" exists ...
	I0513 22:25:27.309804   14020 host.go:66] Checking if "addons-596400" exists ...
	I0513 22:25:27.309804   14020 host.go:66] Checking if "addons-596400" exists ...
	I0513 22:25:27.309804   14020 addons.go:234] Setting addon registry=true in "addons-596400"
	I0513 22:25:27.309804   14020 host.go:66] Checking if "addons-596400" exists ...
	I0513 22:25:27.309804   14020 host.go:66] Checking if "addons-596400" exists ...
	I0513 22:25:27.310817   14020 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-596400"
	I0513 22:25:27.310817   14020 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-596400"
	I0513 22:25:27.310817   14020 addons.go:234] Setting addon helm-tiller=true in "addons-596400"
	I0513 22:25:27.310817   14020 host.go:66] Checking if "addons-596400" exists ...
	I0513 22:25:27.311805   14020 addons.go:234] Setting addon cloud-spanner=true in "addons-596400"
	I0513 22:25:27.311805   14020 host.go:66] Checking if "addons-596400" exists ...
	I0513 22:25:27.311805   14020 addons.go:234] Setting addon storage-provisioner=true in "addons-596400"
	I0513 22:25:27.311805   14020 host.go:66] Checking if "addons-596400" exists ...
	I0513 22:25:27.311805   14020 mustload.go:65] Loading cluster: addons-596400
	I0513 22:25:27.311805   14020 host.go:66] Checking if "addons-596400" exists ...
	I0513 22:25:27.309804   14020 addons.go:234] Setting addon ingress-dns=true in "addons-596400"
	I0513 22:25:27.312822   14020 host.go:66] Checking if "addons-596400" exists ...
	I0513 22:25:27.309804   14020 addons.go:234] Setting addon ingress=true in "addons-596400"
	I0513 22:25:27.314810   14020 host.go:66] Checking if "addons-596400" exists ...
	I0513 22:25:27.315808   14020 config.go:182] Loaded profile config "addons-596400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 22:25:27.315808   14020 config.go:182] Loaded profile config "addons-596400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 22:25:27.319810   14020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:25:27.320810   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:27.321812   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:27.322811   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:27.323809   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:27.323809   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:27.323809   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:27.323809   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:27.324811   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:27.324811   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:27.324811   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:27.324811   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:27.324811   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:27.324811   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:27.324811   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:27.324811   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:28.082733   14020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.23.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0513 22:25:28.125644   14020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 22:25:29.873228   14020 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.23.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.7904687s)
	I0513 22:25:29.873228   14020 start.go:946] {"host.minikube.internal": 172.23.96.1} host record injected into CoreDNS's ConfigMap
	I0513 22:25:29.878230   14020 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.7525609s)
	I0513 22:25:29.880269   14020 node_ready.go:35] waiting up to 6m0s for node "addons-596400" to be "Ready" ...
	I0513 22:25:29.931255   14020 node_ready.go:49] node "addons-596400" has status "Ready":"True"
	I0513 22:25:29.931255   14020 node_ready.go:38] duration metric: took 50.9845ms for node "addons-596400" to be "Ready" ...
	I0513 22:25:29.931255   14020 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0513 22:25:29.962256   14020 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qlsw9" in "kube-system" namespace to be "Ready" ...
	I0513 22:25:30.490257   14020 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-596400" context rescaled to 1 replicas
	I0513 22:25:32.085339   14020 pod_ready.go:102] pod "coredns-7db6d8ff4d-qlsw9" in "kube-system" namespace has status "Ready":"False"
	I0513 22:25:33.819498   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:33.819498   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:33.820500   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:33.820500   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:33.825756   14020 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0513 22:25:33.821492   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:33.821492   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:33.822494   14020 addons.go:234] Setting addon default-storageclass=true in "addons-596400"
	I0513 22:25:33.822494   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:33.825756   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:33.825756   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:33.828480   14020 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0513 22:25:33.828480   14020 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0513 22:25:33.831400   14020 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0513 22:25:33.828480   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:33.828480   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:33.828480   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:33.828480   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:33.828480   14020 host.go:66] Checking if "addons-596400" exists ...
	I0513 22:25:33.830542   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:33.830674   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:33.831400   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:33.834269   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:33.836441   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:33.836441   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:33.837694   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:33.837694   14020 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-596400"
	I0513 22:25:33.838224   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:33.840587   14020 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0513 22:25:33.840587   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:33.840587   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:33.840587   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:33.844705   14020 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0513 22:25:33.844705   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:33.847716   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:33.849723   14020 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0513 22:25:33.855705   14020 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0513 22:25:33.850709   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:33.850709   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:33.849723   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:33.850709   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:33.850709   14020 host.go:66] Checking if "addons-596400" exists ...
	I0513 22:25:33.851719   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:33.855705   14020 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0513 22:25:33.858711   14020 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0513 22:25:33.858711   14020 host.go:66] Checking if "addons-596400" exists ...
	I0513 22:25:33.862725   14020 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0513 22:25:33.866722   14020 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0513 22:25:33.870740   14020 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 22:25:33.870740   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:33.872742   14020 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0513 22:25:33.872742   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:33.874735   14020 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0513 22:25:33.874735   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:33.877720   14020 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0513 22:25:33.877720   14020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0513 22:25:33.881724   14020 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0513 22:25:33.884725   14020 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.28.0
	I0513 22:25:33.888724   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:33.893728   14020 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0513 22:25:33.895722   14020 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0513 22:25:33.897716   14020 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0513 22:25:33.899724   14020 out.go:177]   - Using image docker.io/registry:2.8.3
	I0513 22:25:33.899724   14020 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0513 22:25:33.899724   14020 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0513 22:25:33.899724   14020 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0513 22:25:33.902710   14020 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0513 22:25:33.902710   14020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0513 22:25:33.902710   14020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0513 22:25:33.902710   14020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0513 22:25:33.906713   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:33.906713   14020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0513 22:25:33.906713   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:33.906713   14020 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0513 22:25:33.906713   14020 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0513 22:25:33.906713   14020 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0513 22:25:33.906713   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:33.906713   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:33.912722   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:33.912722   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:33.918709   14020 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0513 22:25:33.927928   14020 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0513 22:25:33.930962   14020 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0513 22:25:33.930962   14020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0513 22:25:33.930962   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:33.923723   14020 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0513 22:25:33.991034   14020 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0513 22:25:33.980419   14020 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0513 22:25:34.015035   14020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0513 22:25:34.015035   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:34.019040   14020 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0513 22:25:34.032529   14020 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0513 22:25:34.036523   14020 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0513 22:25:34.036523   14020 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0513 22:25:34.036523   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:34.523248   14020 pod_ready.go:102] pod "coredns-7db6d8ff4d-qlsw9" in "kube-system" namespace has status "Ready":"False"
	I0513 22:25:36.604596   14020 pod_ready.go:102] pod "coredns-7db6d8ff4d-qlsw9" in "kube-system" namespace has status "Ready":"False"
	I0513 22:25:38.970506   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:38.970506   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:38.970506   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:25:39.062867   14020 pod_ready.go:102] pod "coredns-7db6d8ff4d-qlsw9" in "kube-system" namespace has status "Ready":"False"
	I0513 22:25:39.193223   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:39.193223   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:39.193223   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:25:39.273230   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:39.273230   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:39.273230   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:25:39.277224   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:39.277224   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:39.277224   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:25:39.288888   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:39.289215   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:39.289215   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:25:39.296221   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:39.297213   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:39.300223   14020 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0513 22:25:39.304093   14020 out.go:177]   - Using image docker.io/busybox:stable
	I0513 22:25:39.341176   14020 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0513 22:25:39.341351   14020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0513 22:25:39.341351   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:39.358223   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:39.358223   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:39.358551   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:25:39.679003   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:39.679003   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:39.680001   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:25:39.732640   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:39.732640   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:39.732855   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:25:39.736839   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:39.736839   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:39.736946   14020 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0513 22:25:39.736946   14020 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0513 22:25:39.737049   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:40.142651   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:40.142651   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:40.142651   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:25:40.348681   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:40.348681   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:40.348681   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:25:40.350798   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:40.350798   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:40.350798   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:25:40.360882   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:40.360882   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:40.360882   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:25:40.837226   14020 pod_ready.go:92] pod "coredns-7db6d8ff4d-qlsw9" in "kube-system" namespace has status "Ready":"True"
	I0513 22:25:40.837226   14020 pod_ready.go:81] duration metric: took 10.8748104s for pod "coredns-7db6d8ff4d-qlsw9" in "kube-system" namespace to be "Ready" ...
	I0513 22:25:40.837226   14020 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zrfkl" in "kube-system" namespace to be "Ready" ...
	I0513 22:25:40.953802   14020 pod_ready.go:92] pod "coredns-7db6d8ff4d-zrfkl" in "kube-system" namespace has status "Ready":"True"
	I0513 22:25:40.953802   14020 pod_ready.go:81] duration metric: took 116.5743ms for pod "coredns-7db6d8ff4d-zrfkl" in "kube-system" namespace to be "Ready" ...
	I0513 22:25:40.953802   14020 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-596400" in "kube-system" namespace to be "Ready" ...
	I0513 22:25:41.149422   14020 pod_ready.go:92] pod "etcd-addons-596400" in "kube-system" namespace has status "Ready":"True"
	I0513 22:25:41.149422   14020 pod_ready.go:81] duration metric: took 195.6173ms for pod "etcd-addons-596400" in "kube-system" namespace to be "Ready" ...
	I0513 22:25:41.149422   14020 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-596400" in "kube-system" namespace to be "Ready" ...
	I0513 22:25:41.285583   14020 pod_ready.go:92] pod "kube-apiserver-addons-596400" in "kube-system" namespace has status "Ready":"True"
	I0513 22:25:41.285583   14020 pod_ready.go:81] duration metric: took 136.1597ms for pod "kube-apiserver-addons-596400" in "kube-system" namespace to be "Ready" ...
	I0513 22:25:41.285583   14020 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-596400" in "kube-system" namespace to be "Ready" ...
	I0513 22:25:41.390139   14020 pod_ready.go:92] pod "kube-controller-manager-addons-596400" in "kube-system" namespace has status "Ready":"True"
	I0513 22:25:41.390139   14020 pod_ready.go:81] duration metric: took 104.5537ms for pod "kube-controller-manager-addons-596400" in "kube-system" namespace to be "Ready" ...
	I0513 22:25:41.390139   14020 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mv4p2" in "kube-system" namespace to be "Ready" ...
	I0513 22:25:41.578247   14020 pod_ready.go:92] pod "kube-proxy-mv4p2" in "kube-system" namespace has status "Ready":"True"
	I0513 22:25:41.578247   14020 pod_ready.go:81] duration metric: took 188.1057ms for pod "kube-proxy-mv4p2" in "kube-system" namespace to be "Ready" ...
	I0513 22:25:41.578247   14020 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-596400" in "kube-system" namespace to be "Ready" ...
	I0513 22:25:41.670612   14020 pod_ready.go:92] pod "kube-scheduler-addons-596400" in "kube-system" namespace has status "Ready":"True"
	I0513 22:25:41.671673   14020 pod_ready.go:81] duration metric: took 93.4244ms for pod "kube-scheduler-addons-596400" in "kube-system" namespace to be "Ready" ...
	I0513 22:25:41.671673   14020 pod_ready.go:38] duration metric: took 11.7402467s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0513 22:25:41.671673   14020 api_server.go:52] waiting for apiserver process to appear ...
	I0513 22:25:41.689675   14020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 22:25:41.852677   14020 api_server.go:72] duration metric: took 14.5492237s to wait for apiserver process to appear ...
	I0513 22:25:41.852677   14020 api_server.go:88] waiting for apiserver healthz status ...
	I0513 22:25:41.852677   14020 api_server.go:253] Checking apiserver healthz at https://172.23.108.148:8443/healthz ...
	I0513 22:25:41.949665   14020 api_server.go:279] https://172.23.108.148:8443/healthz returned 200:
	ok
	I0513 22:25:41.950676   14020 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0513 22:25:41.950676   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:42.005680   14020 api_server.go:141] control plane version: v1.30.0
	I0513 22:25:42.005680   14020 api_server.go:131] duration metric: took 153.0015ms to wait for apiserver health ...
	I0513 22:25:42.005680   14020 system_pods.go:43] waiting for kube-system pods to appear ...
	I0513 22:25:42.052403   14020 system_pods.go:59] 7 kube-system pods found
	I0513 22:25:42.052403   14020 system_pods.go:61] "coredns-7db6d8ff4d-qlsw9" [5f7a1dc3-e958-44b4-8382-0a669ec7c6ec] Running
	I0513 22:25:42.052403   14020 system_pods.go:61] "coredns-7db6d8ff4d-zrfkl" [d148b229-587b-48b3-bdda-765d85fd9669] Running
	I0513 22:25:42.052403   14020 system_pods.go:61] "etcd-addons-596400" [7ff84122-fc19-4c35-bc89-fcece6b2aacf] Running
	I0513 22:25:42.052403   14020 system_pods.go:61] "kube-apiserver-addons-596400" [03d124c4-9e30-411f-a130-26d1da1bc8e2] Running
	I0513 22:25:42.052403   14020 system_pods.go:61] "kube-controller-manager-addons-596400" [a089bb55-a242-464c-a3db-9ad798c4dd28] Running
	I0513 22:25:42.052403   14020 system_pods.go:61] "kube-proxy-mv4p2" [668eab45-f5b2-4711-abff-18e25d76ec0d] Running
	I0513 22:25:42.052403   14020 system_pods.go:61] "kube-scheduler-addons-596400" [017213ea-4921-4dc6-aa66-3e33b62764ab] Running
	I0513 22:25:42.052403   14020 system_pods.go:74] duration metric: took 46.7223ms to wait for pod list to return data ...
	I0513 22:25:42.052403   14020 default_sa.go:34] waiting for default service account to be created ...
	I0513 22:25:42.067404   14020 default_sa.go:45] found service account: "default"
	I0513 22:25:42.067404   14020 default_sa.go:55] duration metric: took 15.0005ms for default service account to be created ...
	I0513 22:25:42.067404   14020 system_pods.go:116] waiting for k8s-apps to be running ...
	I0513 22:25:42.081644   14020 system_pods.go:86] 7 kube-system pods found
	I0513 22:25:42.081644   14020 system_pods.go:89] "coredns-7db6d8ff4d-qlsw9" [5f7a1dc3-e958-44b4-8382-0a669ec7c6ec] Running
	I0513 22:25:42.081644   14020 system_pods.go:89] "coredns-7db6d8ff4d-zrfkl" [d148b229-587b-48b3-bdda-765d85fd9669] Running
	I0513 22:25:42.081644   14020 system_pods.go:89] "etcd-addons-596400" [7ff84122-fc19-4c35-bc89-fcece6b2aacf] Running
	I0513 22:25:42.081644   14020 system_pods.go:89] "kube-apiserver-addons-596400" [03d124c4-9e30-411f-a130-26d1da1bc8e2] Running
	I0513 22:25:42.081644   14020 system_pods.go:89] "kube-controller-manager-addons-596400" [a089bb55-a242-464c-a3db-9ad798c4dd28] Running
	I0513 22:25:42.081644   14020 system_pods.go:89] "kube-proxy-mv4p2" [668eab45-f5b2-4711-abff-18e25d76ec0d] Running
	I0513 22:25:42.081644   14020 system_pods.go:89] "kube-scheduler-addons-596400" [017213ea-4921-4dc6-aa66-3e33b62764ab] Running
	I0513 22:25:42.081644   14020 system_pods.go:126] duration metric: took 14.2396ms to wait for k8s-apps to be running ...
	I0513 22:25:42.081644   14020 system_svc.go:44] waiting for kubelet service to be running ....
	I0513 22:25:42.096406   14020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 22:25:42.204018   14020 system_svc.go:56] duration metric: took 122.3719ms WaitForService to wait for kubelet
	I0513 22:25:42.205026   14020 kubeadm.go:576] duration metric: took 14.9015676s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 22:25:42.205026   14020 node_conditions.go:102] verifying NodePressure condition ...
	I0513 22:25:42.226006   14020 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0513 22:25:42.226006   14020 node_conditions.go:123] node cpu capacity is 2
	I0513 22:25:42.226006   14020 node_conditions.go:105] duration metric: took 20.9796ms to run NodePressure ...
	I0513 22:25:42.226006   14020 start.go:240] waiting for startup goroutines ...
	I0513 22:25:44.818526   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:44.818526   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:44.818668   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:25:45.188620   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:25:45.188620   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:45.189621   14020 sshutil.go:53] new ssh client: &{IP:172.23.108.148 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\id_rsa Username:docker}
	I0513 22:25:45.256634   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:45.256634   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:45.256634   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:25:45.291662   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:25:45.293168   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:45.294647   14020 sshutil.go:53] new ssh client: &{IP:172.23.108.148 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\id_rsa Username:docker}
	I0513 22:25:45.501403   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:25:45.501403   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:45.501524   14020 sshutil.go:53] new ssh client: &{IP:172.23.108.148 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\id_rsa Username:docker}
	I0513 22:25:45.626542   14020 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0513 22:25:45.626542   14020 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0513 22:25:45.645358   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:25:45.645906   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:45.645906   14020 sshutil.go:53] new ssh client: &{IP:172.23.108.148 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\id_rsa Username:docker}
	I0513 22:25:45.743076   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:25:45.743076   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:45.744075   14020 sshutil.go:53] new ssh client: &{IP:172.23.108.148 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\id_rsa Username:docker}
	I0513 22:25:45.751074   14020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0513 22:25:45.832180   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:25:45.832180   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:45.832180   14020 sshutil.go:53] new ssh client: &{IP:172.23.108.148 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\id_rsa Username:docker}
	I0513 22:25:45.845548   14020 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0513 22:25:45.845548   14020 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0513 22:25:45.939354   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:25:45.939354   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:45.940345   14020 sshutil.go:53] new ssh client: &{IP:172.23.108.148 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\id_rsa Username:docker}
	I0513 22:25:46.014730   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:25:46.014806   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:46.014806   14020 sshutil.go:53] new ssh client: &{IP:172.23.108.148 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\id_rsa Username:docker}
	I0513 22:25:46.076597   14020 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0513 22:25:46.076597   14020 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0513 22:25:46.090186   14020 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0513 22:25:46.090186   14020 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0513 22:25:46.127702   14020 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0513 22:25:46.128221   14020 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0513 22:25:46.242257   14020 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0513 22:25:46.242257   14020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0513 22:25:46.422008   14020 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0513 22:25:46.422066   14020 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0513 22:25:46.426371   14020 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0513 22:25:46.426371   14020 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0513 22:25:46.447478   14020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0513 22:25:46.448897   14020 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0513 22:25:46.448897   14020 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0513 22:25:46.486714   14020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0513 22:25:46.517881   14020 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0513 22:25:46.517943   14020 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0513 22:25:46.539258   14020 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0513 22:25:46.539258   14020 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0513 22:25:46.617320   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:25:46.617320   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:46.618144   14020 sshutil.go:53] new ssh client: &{IP:172.23.108.148 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\id_rsa Username:docker}
	I0513 22:25:46.640989   14020 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0513 22:25:46.640989   14020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0513 22:25:46.669724   14020 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0513 22:25:46.669816   14020 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0513 22:25:46.699408   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:25:46.699408   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:46.699408   14020 sshutil.go:53] new ssh client: &{IP:172.23.108.148 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\id_rsa Username:docker}
	I0513 22:25:46.757379   14020 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0513 22:25:46.757379   14020 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0513 22:25:46.763181   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:25:46.763181   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:46.763887   14020 sshutil.go:53] new ssh client: &{IP:172.23.108.148 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\id_rsa Username:docker}
	I0513 22:25:46.805133   14020 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0513 22:25:46.805133   14020 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0513 22:25:46.816276   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:25:46.816276   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:46.816556   14020 sshutil.go:53] new ssh client: &{IP:172.23.108.148 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\id_rsa Username:docker}
	I0513 22:25:46.845015   14020 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0513 22:25:46.845015   14020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0513 22:25:46.846008   14020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0513 22:25:46.847014   14020 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0513 22:25:46.847014   14020 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0513 22:25:46.959564   14020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0513 22:25:46.993920   14020 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0513 22:25:46.994018   14020 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0513 22:25:47.006847   14020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0513 22:25:47.062223   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:47.062276   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:47.062276   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:25:47.066895   14020 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.3158016s)
	I0513 22:25:47.192546   14020 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0513 22:25:47.192633   14020 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0513 22:25:47.232225   14020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0513 22:25:47.307585   14020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0513 22:25:47.375105   14020 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0513 22:25:47.375139   14020 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0513 22:25:47.412761   14020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0513 22:25:47.524620   14020 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0513 22:25:47.524620   14020 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0513 22:25:47.609052   14020 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0513 22:25:47.609109   14020 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0513 22:25:47.729767   14020 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0513 22:25:47.729827   14020 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0513 22:25:47.774978   14020 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0513 22:25:47.774978   14020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0513 22:25:47.871731   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:25:47.871731   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:47.872626   14020 sshutil.go:53] new ssh client: &{IP:172.23.108.148 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\id_rsa Username:docker}
	I0513 22:25:47.929234   14020 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0513 22:25:47.929234   14020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0513 22:25:47.981013   14020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0513 22:25:48.233200   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:25:48.233200   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:48.233365   14020 sshutil.go:53] new ssh client: &{IP:172.23.108.148 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\id_rsa Username:docker}
	I0513 22:25:48.268821   14020 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0513 22:25:48.268821   14020 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0513 22:25:48.331327   14020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0513 22:25:48.651742   14020 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0513 22:25:48.651820   14020 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0513 22:25:48.895491   14020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0513 22:25:49.064839   14020 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0513 22:25:49.064943   14020 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0513 22:25:49.221564   14020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0513 22:25:49.564817   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:25:49.564817   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:49.565546   14020 sshutil.go:53] new ssh client: &{IP:172.23.108.148 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\id_rsa Username:docker}
	I0513 22:25:49.647037   14020 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0513 22:25:49.647116   14020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0513 22:25:50.476833   14020 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0513 22:25:50.476893   14020 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0513 22:25:51.101738   14020 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0513 22:25:51.178798   14020 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0513 22:25:51.179795   14020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0513 22:25:51.834677   14020 addons.go:234] Setting addon gcp-auth=true in "addons-596400"
	I0513 22:25:51.834850   14020 host.go:66] Checking if "addons-596400" exists ...
	I0513 22:25:51.842748   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:51.927387   14020 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0513 22:25:51.927489   14020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0513 22:25:52.450665   14020 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0513 22:25:52.450710   14020 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0513 22:25:52.684978   14020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0513 22:25:53.938357   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:53.938514   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:53.948065   14020 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0513 22:25:53.948065   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-596400 ).state
	I0513 22:25:55.982408   14020 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:25:55.982408   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:55.982408   14020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-596400 ).networkadapters[0]).ipaddresses[0]
	I0513 22:25:57.789685   14020 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.3420393s)
	I0513 22:25:57.789801   14020 addons.go:470] Verifying addon ingress=true in "addons-596400"
	I0513 22:25:57.789857   14020 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.3029197s)
	I0513 22:25:57.793176   14020 out.go:177] * Verifying ingress addon...
	I0513 22:25:57.789960   14020 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.9437897s)
	I0513 22:25:57.790184   14020 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.8304606s)
	I0513 22:25:57.790355   14020 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.7832923s)
	I0513 22:25:57.790454   14020 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.5580735s)
	I0513 22:25:57.790556   14020 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.4828159s)
	I0513 22:25:57.790605   14020 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.3771435s)
	I0513 22:25:57.790711   14020 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.8095527s)
	I0513 22:25:57.790855   14020 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.4593369s)
	I0513 22:25:57.790906   14020 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.8952835s)
	I0513 22:25:57.790906   14020 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.5692157s)
	W0513 22:25:57.793176   14020 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0513 22:25:57.795561   14020 addons.go:470] Verifying addon metrics-server=true in "addons-596400"
	I0513 22:25:57.795561   14020 addons.go:470] Verifying addon registry=true in "addons-596400"
	I0513 22:25:57.800462   14020 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-596400 service yakd-dashboard -n yakd-dashboard
	
	I0513 22:25:57.795561   14020 retry.go:31] will retry after 219.714026ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0513 22:25:57.798245   14020 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0513 22:25:57.802382   14020 out.go:177] * Verifying registry addon...
	I0513 22:25:57.806328   14020 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0513 22:25:57.821403   14020 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0513 22:25:57.821434   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:25:57.831001   14020 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0513 22:25:57.831035   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0513 22:25:57.881265   14020 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0513 22:25:58.033878   14020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0513 22:25:58.334556   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:25:58.334689   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:25:58.460323   14020 main.go:141] libmachine: [stdout =====>] : 172.23.108.148
	
	I0513 22:25:58.460323   14020 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:25:58.460969   14020 sshutil.go:53] new ssh client: &{IP:172.23.108.148 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-596400\id_rsa Username:docker}
	I0513 22:25:58.736939   14020 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.0517858s)
	I0513 22:25:58.737010   14020 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-596400"
	I0513 22:25:58.741841   14020 out.go:177] * Verifying csi-hostpath-driver addon...
	I0513 22:25:58.745840   14020 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0513 22:25:58.824863   14020 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0513 22:25:58.824863   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:25:58.846041   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:25:58.868329   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:25:59.258309   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:25:59.319970   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:25:59.322975   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:25:59.768367   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:25:59.814091   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:25:59.818125   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:25:59.975216   14020 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.941309s)
	I0513 22:25:59.975216   14020 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.0270615s)
	I0513 22:25:59.979191   14020 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0513 22:25:59.982203   14020 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0513 22:25:59.984207   14020 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0513 22:25:59.984207   14020 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0513 22:26:00.027323   14020 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0513 22:26:00.027391   14020 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0513 22:26:00.064970   14020 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0513 22:26:00.064970   14020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0513 22:26:00.109076   14020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0513 22:26:00.267065   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:00.320117   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:00.325731   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:00.764515   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:00.809587   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:00.814786   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:01.311774   14020 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.2026528s)
	I0513 22:26:01.316529   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:01.321062   14020 addons.go:470] Verifying addon gcp-auth=true in "addons-596400"
	I0513 22:26:01.324663   14020 out.go:177] * Verifying gcp-auth addon...
	I0513 22:26:01.329657   14020 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0513 22:26:01.380083   14020 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0513 22:26:01.381092   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:01.381092   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:01.381092   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:01.760761   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:01.825621   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:01.828264   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:01.835608   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:02.254166   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:02.320920   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:02.324049   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:02.364572   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:02.760681   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:02.824615   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:02.826872   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:02.835243   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:03.253681   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:03.317506   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:03.318067   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:03.345848   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:03.760585   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:03.821835   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:03.823847   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:03.835203   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:04.255401   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:04.317063   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:04.317906   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:04.346977   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:04.759000   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:04.820568   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:04.820568   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:04.833487   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:05.268219   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:05.308374   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:05.312687   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:05.349412   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:05.758241   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:05.821822   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:05.821822   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:05.833681   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:06.270527   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:06.314961   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:06.319379   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:06.341950   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:06.761638   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:06.823895   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:06.824439   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:06.835562   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:07.252960   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:07.317088   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:07.318674   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:07.346018   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:07.765115   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:07.811280   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:07.815786   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:07.837841   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:08.267003   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:08.311774   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:08.316491   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:08.340989   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:08.916886   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:08.925488   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:08.925765   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:08.932361   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:09.265107   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:09.312333   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:09.312839   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:09.340165   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:09.759195   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:09.821206   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:09.821445   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:09.847999   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:10.999980   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:11.001251   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:11.003746   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:11.006770   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:11.012216   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:11.012877   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:11.014879   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:11.017899   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:11.329846   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:11.335201   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:11.336294   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:11.336294   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:11.754177   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:11.816428   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:11.817944   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:11.845987   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:12.259333   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:12.321701   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:12.323666   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:12.336839   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:12.772037   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:12.817281   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:12.817281   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:12.843506   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:13.297755   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:13.321377   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:13.321838   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:13.335372   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:13.768467   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:13.812996   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:13.816918   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:13.843074   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:14.262652   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:14.322976   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:14.323400   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:14.335928   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:14.765783   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:14.810742   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:14.814896   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:14.841443   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:15.258343   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:15.320026   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:15.321921   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:15.333703   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:15.766010   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:15.812821   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:15.817532   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:15.841108   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:16.260650   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:16.320930   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:16.321113   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:16.348766   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:16.764739   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:16.810064   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:16.818201   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:16.840413   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:17.267723   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:17.313805   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:17.313805   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:17.342815   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:17.761940   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:17.822488   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:17.822488   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:17.836487   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:18.571742   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:18.573552   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:18.574229   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:18.577390   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:18.768918   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:18.818404   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:18.819135   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:18.844199   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:19.255023   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:19.314416   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:19.316704   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:19.345171   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:19.759755   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:19.824222   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:19.824222   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:19.847546   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:20.268426   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:20.315435   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:20.319009   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:20.341638   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:20.755575   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:20.816948   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:20.819138   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:20.846069   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:21.263321   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:21.310266   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:21.313270   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:21.339275   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:21.757228   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:21.818735   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:21.824532   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:21.848305   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:22.267413   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:22.310805   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:22.318724   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:22.363272   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:22.766675   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:22.810482   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:22.816243   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:22.838918   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:23.257465   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:23.317334   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:23.321328   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:23.346520   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:23.763934   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:23.812203   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:23.816331   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:23.839936   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:24.260327   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:24.320838   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:24.321892   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:24.348078   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:24.763052   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:24.808902   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:24.812966   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:24.838729   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:25.257459   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:25.319211   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:25.323618   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:25.347914   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:25.764743   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:25.812246   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:25.815518   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:25.841258   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:26.260742   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:26.320206   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:26.320206   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:26.349053   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:26.762213   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:26.809218   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:26.813664   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:26.839340   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:27.256480   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:27.318497   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:27.320639   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:27.348454   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:27.767990   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:27.812472   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:27.813034   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:27.840751   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:28.258823   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:28.322023   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:28.325600   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:28.348295   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:28.763241   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:28.808522   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:28.812006   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:28.837529   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:29.255633   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:29.320321   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:29.321472   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:29.347590   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:29.765324   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:29.810077   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:29.814418   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:29.839429   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:30.254967   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:30.316921   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:30.316921   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:30.345452   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:30.764817   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:30.813179   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:30.818757   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:30.840073   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:31.255462   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:31.314541   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:31.315188   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:31.344887   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:31.761459   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:31.821889   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:31.823907   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:31.834568   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:32.270724   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:32.315298   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:32.317179   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:32.415105   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:32.756289   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:32.819442   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:32.819442   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:32.848569   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:33.266949   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:33.315009   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:33.318646   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:33.341354   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:33.755378   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:33.820808   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:33.821538   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:33.847802   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:34.266258   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:34.309896   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:34.314131   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:34.340599   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:34.760044   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:34.818432   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:34.820935   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:34.847356   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:35.260948   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:35.321679   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:35.321815   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:35.350185   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:35.856611   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:35.857945   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:35.860650   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:35.863038   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:36.574148   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:36.577075   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:36.577281   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:36.580967   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:36.763222   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:36.820486   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:36.822701   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:36.834859   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:37.264423   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:37.310068   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:37.313607   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:37.339222   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:37.756302   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:37.820764   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:37.820839   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:37.849274   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:38.260429   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:38.324207   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:38.327017   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:38.336547   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:38.760981   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:38.823315   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:38.823841   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:38.836366   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:39.256381   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:39.317555   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:39.318154   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:39.346321   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:39.762382   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:39.823819   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:39.825866   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:39.836142   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:40.371039   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:40.373619   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:40.377233   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:40.379520   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:40.758406   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:40.819178   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:40.819299   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:40.849405   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:41.259558   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:41.321586   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:41.321900   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:41.349611   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:41.761503   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:41.825016   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:41.825016   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:41.837820   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:42.261831   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:42.316174   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:42.317156   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:42.348064   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:42.764926   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:42.817320   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:42.825118   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:42.852328   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:43.269357   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:43.310103   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:43.314429   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:43.338758   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:43.755251   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:43.818585   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:43.819589   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:43.849194   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:44.263999   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:44.309952   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:44.314536   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:44.338610   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:44.920345   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:44.921962   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:44.922509   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:44.926615   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:45.259995   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:45.327829   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:45.329693   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:45.464406   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:45.778891   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:45.807903   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:45.817088   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:45.838669   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:46.258266   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:46.319840   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:46.320843   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:46.349585   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:46.768956   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:46.814788   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:46.818537   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:46.841351   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:47.259646   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:47.320930   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:47.323939   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:47.349861   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:47.765551   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:47.813019   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:47.847075   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:47.853480   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:48.255390   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:48.318527   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:48.319488   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:48.347147   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:48.795099   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:48.811146   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:48.815369   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:48.842270   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:49.266340   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:49.312028   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:49.316050   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:49.339953   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:49.764593   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:49.823910   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:49.823975   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:49.836049   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:50.271155   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:50.314739   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:50.315391   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:50.343133   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:50.762945   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:50.825123   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:50.826479   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:50.835993   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:51.266623   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:51.312359   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:51.312602   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:51.341504   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:51.757423   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:51.822304   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:51.822447   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:51.849226   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:52.267007   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:52.310548   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:52.314372   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:52.339489   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:52.757585   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:52.819920   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:52.819920   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:52.846654   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:53.265082   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:53.325187   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:53.326621   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:53.337198   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:53.770105   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:53.815331   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:53.815862   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:53.843901   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:54.260277   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:54.321443   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:54.321443   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:54.336664   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:54.765107   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:54.811030   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:54.815591   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:54.840703   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:55.258265   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:55.319333   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:55.322372   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:55.349739   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:55.765785   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:55.812720   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:55.813881   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:55.840795   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:56.259079   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:56.323455   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:56.325081   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:56.334797   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:56.767161   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:56.813589   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:56.821089   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:56.843838   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:57.259116   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:57.318983   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:57.319597   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:57.348894   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:57.762378   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:57.823154   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:57.824832   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:57.837055   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:58.271220   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:58.318764   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:58.318970   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:58.343380   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:58.760756   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:58.820873   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:58.820873   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:58.834977   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:59.272706   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:59.312364   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:59.313345   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:59.344061   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:26:59.769199   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:26:59.813278   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:26:59.813595   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:26:59.843717   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:01.055174   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:01.056547   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:01.056547   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:27:01.060349   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:01.063096   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:01.066355   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:27:01.066547   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:01.069431   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:01.272471   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:01.320756   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:27:01.321841   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:01.346801   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:01.756916   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:01.817379   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:01.817926   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0513 22:27:01.846073   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:02.257727   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:02.321961   14020 kapi.go:107] duration metric: took 1m4.514669s to wait for kubernetes.io/minikube-addons=registry ...
	I0513 22:27:02.322244   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:02.348941   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:02.762919   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:02.822905   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:02.836171   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:03.254111   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:03.316573   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:03.347364   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:03.766374   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:03.808841   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:03.839394   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:04.257665   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:04.318684   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:04.349749   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:04.767074   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:04.810798   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:04.840988   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:05.256448   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:05.318516   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:05.346970   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:05.763541   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:05.810320   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:05.840875   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:06.257506   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:06.317473   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:06.348588   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:06.763982   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:06.810137   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:06.839071   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:07.416160   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:07.417022   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:07.421248   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:08.158900   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:08.161213   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:08.161367   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:08.259763   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:08.320662   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:08.350225   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:08.772746   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:08.814541   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:08.846410   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:09.259654   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:09.323247   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:09.335917   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:09.769208   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:09.814198   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:09.845229   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:10.259050   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:10.322310   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:10.336111   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:10.763376   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:10.826025   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:10.838334   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:11.270339   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:11.314682   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:11.345612   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:11.760559   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:11.822892   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:11.836547   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:12.266433   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:12.313431   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:12.344030   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:12.760475   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:12.822624   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:12.835298   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:13.464755   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:13.465280   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:13.468753   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:13.815027   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:13.816786   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:13.845842   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:14.259648   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:14.320725   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:14.355306   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:14.763411   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:14.827077   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:14.837619   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:15.270470   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:15.317736   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:15.345137   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:15.762813   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:15.809973   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:15.839270   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:16.258201   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:16.321737   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:16.348898   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:16.769038   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:16.812986   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:16.844120   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:17.257638   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:17.316474   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:17.345152   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:17.761522   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:17.822254   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:17.835991   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:18.269108   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:18.314316   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:18.343489   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:18.757947   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:18.820561   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:18.833881   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:19.266905   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:19.312147   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:19.343277   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:19.958902   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:19.958902   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:19.958902   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:20.266984   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:20.312620   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:20.342309   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:20.761002   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:20.821240   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:20.850203   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:21.264806   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:21.323203   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:21.336659   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:21.761651   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:21.822879   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:21.835714   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:22.263278   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:22.711277   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:22.712150   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:22.762219   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:22.820626   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:22.834193   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:23.267685   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:23.312502   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:23.342810   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:23.762800   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:23.816846   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:23.846424   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:24.264149   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:24.312896   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:24.340342   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:24.764331   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:24.821508   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:24.835293   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:25.268136   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:25.312654   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:25.341867   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:25.756921   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:25.817909   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:25.850243   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:26.276634   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:26.343261   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:26.348904   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:26.756411   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:26.819344   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:26.869875   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:27.260619   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:27.322109   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:27.350900   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:27.766285   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:27.812505   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:27.841997   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:28.257607   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:28.318815   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:28.357671   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:28.761657   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:28.821470   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:28.834501   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:29.269263   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:29.315259   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:29.344906   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:29.764270   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:29.823207   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:29.837141   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:30.272125   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:30.325309   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:30.717312   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:30.886306   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:30.907115   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:30.910099   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:31.258971   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:31.320378   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:31.349486   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:31.760934   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:31.821832   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:31.850191   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:32.265563   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:32.310579   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:32.347670   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:32.757185   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:32.818471   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:32.846682   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:33.419606   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:33.420684   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:33.424408   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:33.769604   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:33.813389   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:33.843232   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:34.254356   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:34.315456   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:34.348056   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:34.760064   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:34.819988   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:34.850214   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:35.269137   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:35.313149   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:35.342871   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:35.760277   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:35.825140   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:35.836356   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:36.268216   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:36.314988   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:36.614136   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:36.917685   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:36.920693   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:36.925533   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:37.263314   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:37.321290   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:37.336579   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:37.756641   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:37.817390   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:37.849014   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:38.265502   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:38.313727   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:38.343301   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:38.758183   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:38.820991   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:38.849994   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:39.264216   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:39.310477   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:39.340174   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:39.756786   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:39.820422   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:39.849804   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:40.264333   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:40.312537   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:40.341459   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:40.760503   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:40.821092   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:40.848390   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:41.260826   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:41.320251   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:41.334766   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:41.768739   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:41.818537   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:41.843480   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:42.259279   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:42.321205   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:42.336237   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:42.766441   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:42.813837   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:42.843030   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:43.458079   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:43.458967   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:43.460619   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:43.758898   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:43.819699   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:43.849927   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:44.266096   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:44.310092   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:44.338736   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:44.767124   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:44.812499   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:44.843251   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:45.258045   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:45.319477   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:45.352592   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:45.776260   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:45.809219   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:45.840686   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:46.268751   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:46.312905   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:46.341955   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:46.766751   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:46.837046   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:46.844763   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:47.261104   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:47.323007   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:47.348016   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:47.763583   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:47.810880   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:47.840618   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:48.267847   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:48.322483   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:48.336766   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:48.809053   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:48.814650   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:48.850308   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:49.272058   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:49.311132   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:49.340923   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:49.758290   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:49.818691   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:49.846437   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:50.260390   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:50.322089   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:50.337493   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:50.771306   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:50.820077   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:50.845246   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:51.261246   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:51.321246   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:51.336256   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:51.954412   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:51.956445   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:51.957276   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:52.268867   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:52.315806   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:52.343099   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:52.758768   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0513 22:27:52.819594   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:52.846571   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:53.260457   14020 kapi.go:107] duration metric: took 1m54.5128956s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0513 22:27:53.322013   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:53.349915   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:53.812485   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:53.836242   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:54.319958   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:54.350153   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:54.825353   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:54.838291   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:55.323866   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:55.346271   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:55.829213   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:55.837834   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:56.325007   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:56.338670   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:56.826056   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:56.839115   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:57.311760   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:57.341339   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:57.812936   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:57.843016   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:58.312733   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:58.340512   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:58.823873   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:58.836460   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:59.310770   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:59.335662   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:27:59.822598   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:27:59.850793   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:00.321466   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:00.335441   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:00.821358   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:00.849222   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:01.318950   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:01.348255   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:01.820552   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:01.850423   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:02.317713   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:02.359956   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:02.818359   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:02.848077   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:03.320917   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:03.349353   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:03.823579   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:03.835569   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:04.321766   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:04.350393   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:04.820891   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:04.849434   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:05.321361   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:05.351581   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:05.820049   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:05.848067   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:06.321874   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:06.351252   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:06.821337   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:06.849423   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:07.320223   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:07.350215   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:07.819773   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:07.848113   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:08.318817   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:08.347489   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:08.818921   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:08.846184   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:09.316004   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:09.345013   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:09.813632   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:09.842258   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:10.313592   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:10.342393   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:10.825051   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:10.837400   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:11.324297   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:11.337770   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:11.811897   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:11.840331   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:12.312602   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:12.340366   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:12.825924   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:12.838240   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:13.324476   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:13.337442   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:13.814551   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:13.843558   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:14.310820   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:14.340092   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:14.825620   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:14.837905   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:15.316556   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:15.342608   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:15.816912   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:15.846065   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:16.310883   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:16.336671   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:16.814348   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:16.844627   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:17.323716   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:17.337757   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:17.817900   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:17.847955   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:18.312035   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:18.338873   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:18.819747   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:18.849813   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:19.313176   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:19.340801   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:19.819010   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:19.849587   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:20.405068   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:20.406139   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:20.816057   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:20.845565   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:21.318471   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:21.348417   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:21.813491   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:21.839838   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:22.314630   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:22.343789   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:22.823206   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:22.835886   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:23.314683   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:23.343020   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:23.821879   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:23.849580   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:24.313432   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:24.342787   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:24.821650   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:24.849973   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:25.364324   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:25.364324   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:25.820659   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:25.849717   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:26.313304   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:26.343677   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:26.822613   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:26.835072   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:27.325570   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:27.347634   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:27.826876   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:27.838964   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:28.325992   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:28.338837   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:29.108623   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:29.110435   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:29.359565   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:29.360354   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:29.818038   14020 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0513 22:28:29.847967   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:30.329481   14020 kapi.go:107] duration metric: took 2m32.5289351s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0513 22:28:30.341253   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:30.850696   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:31.354677   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:31.913401   14020 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0513 22:28:32.368803   14020 kapi.go:107] duration metric: took 2m31.0368666s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0513 22:28:32.372388   14020 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-596400 cluster.
	I0513 22:28:32.374399   14020 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0513 22:28:32.379675   14020 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0513 22:28:32.386690   14020 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, inspektor-gadget, helm-tiller, metrics-server, ingress-dns, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0513 22:28:32.390710   14020 addons.go:505] duration metric: took 3m5.0846892s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner inspektor-gadget helm-tiller metrics-server ingress-dns yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0513 22:28:32.390710   14020 start.go:245] waiting for cluster config update ...
	I0513 22:28:32.390710   14020 start.go:254] writing updated cluster config ...
	I0513 22:28:32.401596   14020 ssh_runner.go:195] Run: rm -f paused
	I0513 22:28:32.647326   14020 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0513 22:28:32.650253   14020 out.go:177] * Done! kubectl is now configured to use "addons-596400" cluster and "default" namespace by default
	
	
	==> Docker <==
	May 13 22:29:13 addons-596400 dockerd[1328]: time="2024-05-13T22:29:13.286795307Z" level=info msg="cleaning up dead shim" namespace=moby
	May 13 22:29:13 addons-596400 dockerd[1328]: time="2024-05-13T22:29:13.475162150Z" level=info msg="shim disconnected" id=1282e4f13af5606c034e8973db3fcfc6c6bafd6ddc4534e3e492387fdbcea839 namespace=moby
	May 13 22:29:13 addons-596400 dockerd[1322]: time="2024-05-13T22:29:13.475460563Z" level=info msg="ignoring event" container=1282e4f13af5606c034e8973db3fcfc6c6bafd6ddc4534e3e492387fdbcea839 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 13 22:29:13 addons-596400 dockerd[1328]: time="2024-05-13T22:29:13.476663515Z" level=warning msg="cleaning up after shim disconnected" id=1282e4f13af5606c034e8973db3fcfc6c6bafd6ddc4534e3e492387fdbcea839 namespace=moby
	May 13 22:29:13 addons-596400 dockerd[1328]: time="2024-05-13T22:29:13.476691416Z" level=info msg="cleaning up dead shim" namespace=moby
	May 13 22:29:13 addons-596400 dockerd[1328]: time="2024-05-13T22:29:13.500349039Z" level=warning msg="cleanup warnings time=\"2024-05-13T22:29:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	May 13 22:29:17 addons-596400 cri-dockerd[1232]: time="2024-05-13T22:29:17Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.28.0@sha256:08d39eb6f0f6a1d5492b87ab5042ec3f8fc0ad82bfe65a7548d25c1944b1698a: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:08d39eb6f0f6a1d5492b87ab5042ec3f8fc0ad82bfe65a7548d25c1944b1698a"
	May 13 22:29:17 addons-596400 dockerd[1328]: time="2024-05-13T22:29:17.985800138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 13 22:29:17 addons-596400 dockerd[1328]: time="2024-05-13T22:29:17.986043248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 13 22:29:17 addons-596400 dockerd[1328]: time="2024-05-13T22:29:17.986205354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 22:29:17 addons-596400 dockerd[1328]: time="2024-05-13T22:29:17.986454964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 22:29:19 addons-596400 dockerd[1322]: time="2024-05-13T22:29:19.118887726Z" level=info msg="ignoring event" container=3b4925974820c995c6fc641acd3bf22282a6bb6331a18d6ec84f04316126498d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 13 22:29:19 addons-596400 dockerd[1328]: time="2024-05-13T22:29:19.122138457Z" level=info msg="shim disconnected" id=3b4925974820c995c6fc641acd3bf22282a6bb6331a18d6ec84f04316126498d namespace=moby
	May 13 22:29:19 addons-596400 dockerd[1328]: time="2024-05-13T22:29:19.122585875Z" level=warning msg="cleaning up after shim disconnected" id=3b4925974820c995c6fc641acd3bf22282a6bb6331a18d6ec84f04316126498d namespace=moby
	May 13 22:29:19 addons-596400 dockerd[1328]: time="2024-05-13T22:29:19.122716780Z" level=info msg="cleaning up dead shim" namespace=moby
	May 13 22:29:19 addons-596400 dockerd[1328]: time="2024-05-13T22:29:19.369436943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 13 22:29:19 addons-596400 dockerd[1328]: time="2024-05-13T22:29:19.369649552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 13 22:29:19 addons-596400 dockerd[1328]: time="2024-05-13T22:29:19.369739855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 22:29:19 addons-596400 dockerd[1328]: time="2024-05-13T22:29:19.370081369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 22:29:19 addons-596400 cri-dockerd[1232]: time="2024-05-13T22:29:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/46b13c60b3fbb3253a07350725c6dd4c351d90e2497eebcffa427b4556526529/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 13 22:29:20 addons-596400 cri-dockerd[1232]: time="2024-05-13T22:29:20Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	May 13 22:29:20 addons-596400 dockerd[1328]: time="2024-05-13T22:29:20.438547787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 13 22:29:20 addons-596400 dockerd[1328]: time="2024-05-13T22:29:20.438719294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 13 22:29:20 addons-596400 dockerd[1328]: time="2024-05-13T22:29:20.438735095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 22:29:20 addons-596400 dockerd[1328]: time="2024-05-13T22:29:20.439020306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	dd23db616cf85       nginx@sha256:32e76d4f34f80e479964a0fbd4c5b4f6967b5322c8d004e9cf0cb81c93510766                                                                3 seconds ago        Running             task-pv-container                        0                   46b13c60b3fbb       task-pv-pod-restore
	3b4925974820c       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:08d39eb6f0f6a1d5492b87ab5042ec3f8fc0ad82bfe65a7548d25c1944b1698a                            6 seconds ago        Exited              gadget                                   4                   45fdf28a10565       gadget-6jp6z
	46358aa214cc6       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 52 seconds ago       Running             gcp-auth                                 0                   17e8de70f6c7f       gcp-auth-5db96cd9b4-n8h4x
	411e01c0ba456       registry.k8s.io/ingress-nginx/controller@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e                             56 seconds ago       Running             controller                               0                   9c4fbab17e128       ingress-nginx-controller-768f948f8f-8p5wn
	8ed47700e3284       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   f42dde4b2f7be       csi-hostpathplugin-r4j54
	35679fd04cb20       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   f42dde4b2f7be       csi-hostpathplugin-r4j54
	60aa3e306f95e       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            About a minute ago   Running             liveness-probe                           0                   f42dde4b2f7be       csi-hostpathplugin-r4j54
	f8c51a29330cf       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           About a minute ago   Running             hostpath                                 0                   f42dde4b2f7be       csi-hostpathplugin-r4j54
	d58ed25ed6b65       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                About a minute ago   Running             node-driver-registrar                    0                   f42dde4b2f7be       csi-hostpathplugin-r4j54
	6dad529865277       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             About a minute ago   Running             csi-attacher                             0                   bf419dc834e2d       csi-hostpath-attacher-0
	589cd0ccde8e3       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              About a minute ago   Running             csi-resizer                              0                   b805479c1610b       csi-hostpath-resizer-0
	934d0a0f7e002       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   About a minute ago   Running             csi-external-health-monitor-controller   0                   f42dde4b2f7be       csi-hostpathplugin-r4j54
	fa82beccf6bd5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366                   About a minute ago   Exited              patch                                    0                   47ca730ee222a       ingress-nginx-admission-patch-jr2sp
	e41898731b514       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366                   About a minute ago   Exited              create                                   0                   513a19035020b       ingress-nginx-admission-create-8vmt8
	99b69935e615c       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   a442ff25e809e       snapshot-controller-745499f584-gdn2s
	2a08354cbf47a       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        2 minutes ago        Running             yakd                                     0                   93637d58963b6       yakd-dashboard-5ddbf7d777-8hpfz
	02774f48eb18d       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   644a38b9727a3       snapshot-controller-745499f584-t5jmp
	750ae2856223c       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       2 minutes ago        Running             local-path-provisioner                   0                   e13ae601118cb       local-path-provisioner-8d985888d-4w7w5
	4af90242aaaab       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  2 minutes ago        Running             tiller                                   0                   0843bfa8097a3       tiller-deploy-6677d64bcd-42bzp
	66deca31752e2       registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a                        2 minutes ago        Running             metrics-server                           0                   a33f45b8dc5b7       metrics-server-c59844bb4-frvlc
	dd76993933af9       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             2 minutes ago        Running             minikube-ingress-dns                     0                   5238ad7eda8b1       kube-ingress-dns-minikube
	a442844ad60df       nvcr.io/nvidia/k8s-device-plugin@sha256:1aff0e9f0759758f87cb158d78241472af3a76cdc631f01ab395f997fa80f707                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   c9028e0466acc       nvidia-device-plugin-daemonset-cnb25
	82c23b831bd6b       6e38f40d628db                                                                                                                                3 minutes ago        Running             storage-provisioner                      0                   88fa826128e71       storage-provisioner
	1fc347c3c303c       cbb01a7bd410d                                                                                                                                3 minutes ago        Running             coredns                                  0                   ceb477c71010b       coredns-7db6d8ff4d-qlsw9
	beb345428e4f0       a0bf559e280cf                                                                                                                                3 minutes ago        Running             kube-proxy                               0                   bfaa4a2e96414       kube-proxy-mv4p2
	aacc40a388f06       3861cfcd7c04c                                                                                                                                4 minutes ago        Running             etcd                                     0                   8cd63cbcf01dc       etcd-addons-596400
	8a9b01e5d1dfa       c42f13656d0b2                                                                                                                                4 minutes ago        Running             kube-apiserver                           0                   5d13dc5b55528       kube-apiserver-addons-596400
	451d48692bffd       c7aad43836fa5                                                                                                                                4 minutes ago        Running             kube-controller-manager                  0                   05df9f2f1269b       kube-controller-manager-addons-596400
	dea135d30e6fe       259c8277fcbbc                                                                                                                                4 minutes ago        Running             kube-scheduler                           0                   f7e6ab6cfc7fe       kube-scheduler-addons-596400
	
	
	==> controller_ingress [411e01c0ba45] <==
	  Build:         4fb5aac1dd3669daa3a14d9de3e3cdb371b4c518
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.3
	
	-------------------------------------------------------------------------------
	
	W0513 22:28:29.762073       7 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0513 22:28:29.762416       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0513 22:28:29.769837       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="30" git="v1.30.0" state="clean" commit="7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a" platform="linux/amd64"
	I0513 22:28:30.154963       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0513 22:28:30.183841       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0513 22:28:30.225697       7 nginx.go:264] "Starting NGINX Ingress controller"
	I0513 22:28:30.256750       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"bfd7f309-244b-485e-9ee0-b2ac89de586a", APIVersion:"v1", ResourceVersion:"719", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0513 22:28:30.265137       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"968897da-7e36-4773-bb4c-2a8b39faa694", APIVersion:"v1", ResourceVersion:"721", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0513 22:28:30.265173       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"ffac1043-4abb-49ac-a2f7-31219cb4e477", APIVersion:"v1", ResourceVersion:"724", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0513 22:28:31.429346       7 nginx.go:307] "Starting NGINX process"
	I0513 22:28:31.429640       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0513 22:28:31.429992       7 nginx.go:327] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0513 22:28:31.430869       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0513 22:28:31.456871       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0513 22:28:31.457469       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-768f948f8f-8p5wn"
	I0513 22:28:31.468863       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-768f948f8f-8p5wn" node="addons-596400"
	I0513 22:28:31.523344       7 controller.go:210] "Backend successfully reloaded"
	I0513 22:28:31.523419       7 controller.go:221] "Initial sync, sleeping for 1 second"
	I0513 22:28:31.523688       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-768f948f8f-8p5wn", UID:"c3e669e0-b1d0-414c-a41b-e314468719cd", APIVersion:"v1", ResourceVersion:"756", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	
	==> coredns [1fc347c3c303] <==
	[INFO] 10.244.0.9:40823 - 28098 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000305209s
	[INFO] 10.244.0.9:40880 - 1124 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000126204s
	[INFO] 10.244.0.9:40880 - 18275 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00067572s
	[INFO] 10.244.0.9:53665 - 25375 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000179505s
	[INFO] 10.244.0.9:53665 - 40450 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092403s
	[INFO] 10.244.0.9:33063 - 37799 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000093903s
	[INFO] 10.244.0.9:33063 - 30884 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000384812s
	[INFO] 10.244.0.9:38863 - 61150 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000182905s
	[INFO] 10.244.0.9:38863 - 44497 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000224706s
	[INFO] 10.244.0.9:60507 - 25649 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000060001s
	[INFO] 10.244.0.9:60507 - 54078 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000131604s
	[INFO] 10.244.0.9:60178 - 17291 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000116904s
	[INFO] 10.244.0.9:60178 - 23945 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000074702s
	[INFO] 10.244.0.9:34183 - 26406 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000133204s
	[INFO] 10.244.0.9:34183 - 7460 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000221006s
	[INFO] 10.244.0.22:45185 - 36795 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000242109s
	[INFO] 10.244.0.22:59916 - 35378 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000154506s
	[INFO] 10.244.0.22:44127 - 26396 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000112505s
	[INFO] 10.244.0.22:54192 - 29541 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000131405s
	[INFO] 10.244.0.22:40023 - 53395 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000111804s
	[INFO] 10.244.0.22:47480 - 22748 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114705s
	[INFO] 10.244.0.22:39788 - 30440 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 230 0.004106158s
	[INFO] 10.244.0.22:57282 - 5528 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.00414196s
	[INFO] 10.244.0.25:58179 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000237409s
	[INFO] 10.244.0.25:50876 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000127806s
	
	
	==> describe nodes <==
	Name:               addons-596400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-596400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	                    minikube.k8s.io/name=addons-596400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_13T22_25_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-596400
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-596400"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 May 2024 22:25:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-596400
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 May 2024 22:29:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 May 2024 22:29:18 +0000   Mon, 13 May 2024 22:25:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 May 2024 22:29:18 +0000   Mon, 13 May 2024 22:25:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 May 2024 22:29:18 +0000   Mon, 13 May 2024 22:25:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 May 2024 22:29:18 +0000   Mon, 13 May 2024 22:25:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.108.148
	  Hostname:    addons-596400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 3bfecd35a21b4769897542172f924eee
	  System UUID:                03613fb9-3702-3543-b4f6-fee50b3644e5
	  Boot ID:                    3acd68fa-93df-48a1-b85d-e44fd09f8421
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     task-pv-pod-restore                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  gadget                      gadget-6jp6z                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  gcp-auth                    gcp-auth-5db96cd9b4-n8h4x                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  ingress-nginx               ingress-nginx-controller-768f948f8f-8p5wn    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         3m27s
	  kube-system                 coredns-7db6d8ff4d-qlsw9                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m57s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 csi-hostpathplugin-r4j54                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 etcd-addons-596400                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m11s
	  kube-system                 kube-apiserver-addons-596400                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-controller-manager-addons-596400        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 kube-proxy-mv4p2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-scheduler-addons-596400                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 metrics-server-c59844bb4-frvlc               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         3m30s
	  kube-system                 nvidia-device-plugin-daemonset-cnb25         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 snapshot-controller-745499f584-gdn2s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 snapshot-controller-745499f584-t5jmp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 tiller-deploy-6677d64bcd-42bzp               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  local-path-storage          local-path-provisioner-8d985888d-4w7w5       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-8hpfz              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m19s (x8 over 4m19s)  kubelet          Node addons-596400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m19s (x8 over 4m19s)  kubelet          Node addons-596400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m19s (x7 over 4m19s)  kubelet          Node addons-596400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m11s                  kubelet          Node addons-596400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s                  kubelet          Node addons-596400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s                  kubelet          Node addons-596400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m9s                   kubelet          Node addons-596400 status is now: NodeReady
	  Normal  RegisteredNode           3m58s                  node-controller  Node addons-596400 event: Registered Node addons-596400 in Controller
	
	
	==> dmesg <==
	[ +14.387822] systemd-fstab-generator[2338]: Ignoring "noauto" option for root device
	[  +0.533524] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.231593] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.227171] kauditd_printk_skb: 54 callbacks suppressed
	[  +6.877358] kauditd_printk_skb: 6 callbacks suppressed
	[  +8.867571] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.056353] kauditd_printk_skb: 108 callbacks suppressed
	[May13 22:26] kauditd_printk_skb: 103 callbacks suppressed
	[ +33.905768] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.266606] kauditd_printk_skb: 4 callbacks suppressed
	[May13 22:27] kauditd_printk_skb: 29 callbacks suppressed
	[ +14.041832] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.063866] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.534452] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.897187] kauditd_printk_skb: 16 callbacks suppressed
	[May13 22:28] kauditd_printk_skb: 29 callbacks suppressed
	[ +13.937022] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.394288] kauditd_printk_skb: 52 callbacks suppressed
	[  +6.107726] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.014204] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.232176] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.131246] kauditd_printk_skb: 26 callbacks suppressed
	[May13 22:29] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.028674] kauditd_printk_skb: 19 callbacks suppressed
	[  +8.943056] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [aacc40a388f0] <==
	{"level":"warn","ts":"2024-05-13T22:28:56.647041Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.028327ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes/pvc-da9206f4-c917-4595-b5c0-874e94c44c3c\" ","response":"range_response_count:1 size:1262"}
	{"level":"info","ts":"2024-05-13T22:28:56.647064Z","caller":"traceutil/trace.go:171","msg":"trace[1010149620] range","detail":"{range_begin:/registry/persistentvolumes/pvc-da9206f4-c917-4595-b5c0-874e94c44c3c; range_end:; response_count:1; response_revision:1436; }","duration":"182.077829ms","start":"2024-05-13T22:28:56.464978Z","end":"2024-05-13T22:28:56.647056Z","steps":["trace[1010149620] 'range keys from in-memory index tree'  (duration: 181.909122ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T22:29:04.457921Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.429797ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11370339412861819222 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/registry-proxy-h8d79\" mod_revision:1013 > success:<request_put:<key:\"/registry/pods/kube-system/registry-proxy-h8d79\" value_size:3953 >> failure:<request_range:<key:\"/registry/pods/kube-system/registry-proxy-h8d79\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-05-13T22:29:04.458067Z","caller":"traceutil/trace.go:171","msg":"trace[223920651] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1472; }","duration":"275.766626ms","start":"2024-05-13T22:29:04.18229Z","end":"2024-05-13T22:29:04.458056Z","steps":["trace[223920651] 'process raft request'  (duration: 275.718324ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-13T22:29:04.45914Z","caller":"traceutil/trace.go:171","msg":"trace[100435566] transaction","detail":"{read_only:false; response_revision:1471; number_of_response:1; }","duration":"277.142185ms","start":"2024-05-13T22:29:04.181983Z","end":"2024-05-13T22:29:04.459125Z","steps":["trace[100435566] 'process raft request'  (duration: 12.441834ms)","trace[100435566] 'compare'  (duration: 263.141684ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-13T22:29:04.459393Z","caller":"traceutil/trace.go:171","msg":"trace[1448090192] linearizableReadLoop","detail":"{readStateIndex:1548; appliedIndex:1547; }","duration":"277.169486ms","start":"2024-05-13T22:29:04.182216Z","end":"2024-05-13T22:29:04.459385Z","steps":["trace[1448090192] 'read index received'  (duration: 12.721746ms)","trace[1448090192] 'applied index is now lower than readState.Index'  (duration: 264.44704ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-13T22:29:04.459459Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"277.234189ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-13T22:29:04.459479Z","caller":"traceutil/trace.go:171","msg":"trace[1235027586] range","detail":"{range_begin:/registry/services/specs/kube-system/registry; range_end:; response_count:0; response_revision:1472; }","duration":"277.27329ms","start":"2024-05-13T22:29:04.1822Z","end":"2024-05-13T22:29:04.459473Z","steps":["trace[1235027586] 'agreement among raft nodes before linearized reading'  (duration: 277.229788ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T22:29:05.266127Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.740373ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-13T22:29:05.266234Z","caller":"traceutil/trace.go:171","msg":"trace[357413392] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1472; }","duration":"155.89998ms","start":"2024-05-13T22:29:05.110322Z","end":"2024-05-13T22:29:05.266222Z","steps":["trace[357413392] 'range keys from in-memory index tree'  (duration: 155.725173ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T22:29:05.271381Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"599.28439ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11370339412861819225 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/registry-proxy-h8d79.17cf2ce4d984cb64\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/registry-proxy-h8d79.17cf2ce4d984cb64\" value_size:651 lease:2146967376007042684 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-05-13T22:29:05.27145Z","caller":"traceutil/trace.go:171","msg":"trace[1655082724] linearizableReadLoop","detail":"{readStateIndex:1550; appliedIndex:1549; }","duration":"797.641296ms","start":"2024-05-13T22:29:04.473795Z","end":"2024-05-13T22:29:05.271436Z","steps":["trace[1655082724] 'read index received'  (duration: 195.711593ms)","trace[1655082724] 'applied index is now lower than readState.Index'  (duration: 601.928503ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-13T22:29:05.271688Z","caller":"traceutil/trace.go:171","msg":"trace[1835037872] transaction","detail":"{read_only:false; response_revision:1473; number_of_response:1; }","duration":"797.989111ms","start":"2024-05-13T22:29:04.473687Z","end":"2024-05-13T22:29:05.271676Z","steps":["trace[1835037872] 'process raft request'  (duration: 195.862599ms)","trace[1835037872] 'compare'  (duration: 599.163085ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-13T22:29:05.271739Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-13T22:29:04.473674Z","time spent":"798.037013ms","remote":"127.0.0.1:42742","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":735,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/registry-proxy-h8d79.17cf2ce4d984cb64\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/registry-proxy-h8d79.17cf2ce4d984cb64\" value_size:651 lease:2146967376007042684 >> failure:<>"}
	{"level":"warn","ts":"2024-05-13T22:29:05.271941Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"798.152818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/registry-proxy-h8d79\" ","response":"range_response_count:1 size:4023"}
	{"level":"info","ts":"2024-05-13T22:29:05.27196Z","caller":"traceutil/trace.go:171","msg":"trace[1190485713] range","detail":"{range_begin:/registry/pods/kube-system/registry-proxy-h8d79; range_end:; response_count:1; response_revision:1473; }","duration":"798.18912ms","start":"2024-05-13T22:29:04.473765Z","end":"2024-05-13T22:29:05.271954Z","steps":["trace[1190485713] 'agreement among raft nodes before linearized reading'  (duration: 798.073515ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T22:29:05.271978Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-13T22:29:04.47376Z","time spent":"798.21372ms","remote":"127.0.0.1:42836","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":1,"response size":4047,"request content":"key:\"/registry/pods/kube-system/registry-proxy-h8d79\" "}
	{"level":"warn","ts":"2024-05-13T22:29:05.272099Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"587.270475ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:1 size:3179"}
	{"level":"info","ts":"2024-05-13T22:29:05.272115Z","caller":"traceutil/trace.go:171","msg":"trace[1325723427] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:1; response_revision:1473; }","duration":"587.314576ms","start":"2024-05-13T22:29:04.684795Z","end":"2024-05-13T22:29:05.27211Z","steps":["trace[1325723427] 'agreement among raft nodes before linearized reading'  (duration: 587.259174ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T22:29:05.272129Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-13T22:29:04.684782Z","time spent":"587.343578ms","remote":"127.0.0.1:42836","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":3203,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2024-05-13T22:29:05.272475Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.599282ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2024-05-13T22:29:05.272501Z","caller":"traceutil/trace.go:171","msg":"trace[1539822037] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1473; }","duration":"160.658384ms","start":"2024-05-13T22:29:05.111835Z","end":"2024-05-13T22:29:05.272494Z","steps":["trace[1539822037] 'agreement among raft nodes before linearized reading'  (duration: 160.500077ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T22:29:05.272697Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"543.042578ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-05-13T22:29:05.272732Z","caller":"traceutil/trace.go:171","msg":"trace[963269825] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1473; }","duration":"543.103281ms","start":"2024-05-13T22:29:04.729624Z","end":"2024-05-13T22:29:05.272727Z","steps":["trace[963269825] 'agreement among raft nodes before linearized reading'  (duration: 543.028678ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T22:29:05.27275Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-13T22:29:04.729611Z","time spent":"543.133782ms","remote":"127.0.0.1:42944","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":522,"request content":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" "}
	
	
	==> gcp-auth [46358aa214cc] <==
	2024/05/13 22:28:32 GCP Auth Webhook started!
	2024/05/13 22:28:33 Ready to marshal response ...
	2024/05/13 22:28:33 Ready to write response ...
	2024/05/13 22:28:33 Ready to marshal response ...
	2024/05/13 22:28:33 Ready to write response ...
	2024/05/13 22:28:43 Ready to marshal response ...
	2024/05/13 22:28:43 Ready to write response ...
	2024/05/13 22:28:50 Ready to marshal response ...
	2024/05/13 22:28:50 Ready to write response ...
	2024/05/13 22:28:54 Ready to marshal response ...
	2024/05/13 22:28:54 Ready to write response ...
	2024/05/13 22:28:59 Ready to marshal response ...
	2024/05/13 22:28:59 Ready to write response ...
	2024/05/13 22:29:18 Ready to marshal response ...
	2024/05/13 22:29:18 Ready to write response ...
	
	
	==> kernel <==
	 22:29:24 up 6 min,  0 users,  load average: 3.04, 2.30, 1.05
	Linux addons-596400 5.10.207 #1 SMP Thu May 9 02:07:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8a9b01e5d1df] <==
	Trace[1468920542]: [713.783949ms] [713.783949ms] END
	I0513 22:27:01.147800       1 trace.go:236] Trace[199192952]: "List" accept:application/json, */*,audit-id:ac5465ab-570a-4699-9f86-0e43aba309c7,client:172.23.96.1,api-group:,api-version:v1,name:,subresource:,namespace:ingress-nginx,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/ingress-nginx/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (13-May-2024 22:27:00.403) (total time: 744ms):
	Trace[199192952]: ["List(recursive=true) etcd3" audit-id:ac5465ab-570a-4699-9f86-0e43aba309c7,key:/pods/ingress-nginx,resourceVersion:,resourceVersionMatch:,limit:0,continue: 744ms (22:27:00.403)]
	Trace[199192952]: [744.656455ms] [744.656455ms] END
	I0513 22:27:01.149514       1 trace.go:236] Trace[1850077661]: "List" accept:application/json, */*,audit-id:44849654-f024-49ac-b3c1-7144e780f7bf,client:172.23.96.1,api-group:,api-version:v1,name:,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/kube-system/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (13-May-2024 22:27:00.402) (total time: 746ms):
	Trace[1850077661]: ["List(recursive=true) etcd3" audit-id:44849654-f024-49ac-b3c1-7144e780f7bf,key:/pods/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: 746ms (22:27:00.402)]
	Trace[1850077661]: [746.908721ms] [746.908721ms] END
	I0513 22:27:01.154066       1 trace.go:236] Trace[1959640485]: "List" accept:application/json, */*,audit-id:55df3ee0-d697-47c5-8a6b-8e9ef5154544,client:172.23.96.1,api-group:,api-version:v1,name:,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/kube-system/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (13-May-2024 22:27:00.355) (total time: 798ms):
	Trace[1959640485]: ["List(recursive=true) etcd3" audit-id:55df3ee0-d697-47c5-8a6b-8e9ef5154544,key:/pods/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: 798ms (22:27:00.355)]
	Trace[1959640485]: [798.230628ms] [798.230628ms] END
	I0513 22:29:05.276993       1 trace.go:236] Trace[1590379631]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:9e6e2201-bef8-484d-861b-84647e346ede,client:172.23.108.148,api-group:,api-version:v1,name:,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:POST (13-May-2024 22:29:04.471) (total time: 805ms):
	Trace[1590379631]: ["Create etcd3" audit-id:9e6e2201-bef8-484d-861b-84647e346ede,key:/events/kube-system/registry-proxy-h8d79.17cf2ce4d984cb64,type:*core.Event,resource:events 804ms (22:29:04.472)
	Trace[1590379631]:  ---"Txn call succeeded" 804ms (22:29:05.276)]
	Trace[1590379631]: [805.086816ms] [805.086816ms] END
	I0513 22:29:05.277030       1 trace.go:236] Trace[1468120887]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d8ac13e5-e633-4552-8b5d-7611c293f4b1,client:172.23.108.148,api-group:,api-version:v1,name:registry-proxy-h8d79,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/registry-proxy-h8d79,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:GET (13-May-2024 22:29:04.471) (total time: 805ms):
	Trace[1468120887]: ---"About to write a response" 805ms (22:29:05.276)
	Trace[1468120887]: [805.531435ms] [805.531435ms] END
	I0513 22:29:05.277854       1 trace.go:236] Trace[98835995]: "List" accept:application/json, */*,audit-id:5ce8c5ba-de28-4e58-85b9-8ce883bc4524,client:172.23.96.1,api-group:,api-version:v1,name:,subresource:,namespace:default,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (13-May-2024 22:29:04.684) (total time: 593ms):
	Trace[98835995]: ["List(recursive=true) etcd3" audit-id:5ce8c5ba-de28-4e58-85b9-8ce883bc4524,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: 593ms (22:29:04.684)]
	Trace[98835995]: [593.776653ms] [593.776653ms] END
	I0513 22:29:05.284836       1 trace.go:236] Trace[1099247099]: "Get" accept:application/json, */*,audit-id:ca5c056f-0ed3-4d82-b088-a4c7e485c153,client:10.244.0.11,api-group:coordination.k8s.io,api-version:v1,name:snapshot-controller-leader,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/snapshot-controller-leader,user-agent:snapshot-controller/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (13-May-2024 22:29:04.728) (total time: 556ms):
	Trace[1099247099]: ---"About to write a response" 556ms (22:29:05.284)
	Trace[1099247099]: [556.107137ms] [556.107137ms] END
	E0513 22:29:09.359422       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 172.23.108.148:8443->10.244.0.28:35726: read: connection reset by peer
	I0513 22:29:11.863509       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [451d48692bff] <==
	I0513 22:27:38.638815       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0513 22:27:40.280057       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0513 22:27:40.344883       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0513 22:27:40.695078       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0513 22:27:40.761512       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0513 22:27:41.295284       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0513 22:27:41.308929       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0513 22:27:41.320336       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0513 22:27:41.359671       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0513 22:27:41.387083       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0513 22:27:41.398380       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0513 22:27:45.823330       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-745499f584" duration="9.327255ms"
	I0513 22:27:45.824150       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-745499f584" duration="186.107µs"
	I0513 22:28:11.020588       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0513 22:28:11.027209       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0513 22:28:11.092601       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0513 22:28:11.095815       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0513 22:28:30.362157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="53.802µs"
	I0513 22:28:32.462207       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="21.233021ms"
	I0513 22:28:32.462627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="291.311µs"
	I0513 22:28:45.614907       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="43.133935ms"
	I0513 22:28:45.615054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="71.403µs"
	I0513 22:28:53.426359       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-6fcd4f6f98" duration="7.2µs"
	I0513 22:29:04.031746       1 replica_set.go:676] "Finished syncing" logger="replicationcontroller-controller" kind="ReplicationController" key="kube-system/registry" duration="11.501µs"
	I0513 22:29:10.735395       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-8d985888d" duration="7.5µs"
	
	
	==> kube-proxy [beb345428e4f] <==
	I0513 22:25:35.472506       1 server_linux.go:69] "Using iptables proxy"
	I0513 22:25:35.513657       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.108.148"]
	I0513 22:25:35.715195       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0513 22:25:35.715442       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0513 22:25:35.715480       1 server_linux.go:165] "Using iptables Proxier"
	I0513 22:25:35.731148       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0513 22:25:35.732202       1 server.go:872] "Version info" version="v1.30.0"
	I0513 22:25:35.732300       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0513 22:25:35.734033       1 config.go:192] "Starting service config controller"
	I0513 22:25:35.734153       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0513 22:25:35.734199       1 config.go:101] "Starting endpoint slice config controller"
	I0513 22:25:35.734208       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0513 22:25:35.735122       1 config.go:319] "Starting node config controller"
	I0513 22:25:35.735166       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0513 22:25:35.845629       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0513 22:25:35.875875       1 shared_informer.go:320] Caches are synced for service config
	I0513 22:25:35.939035       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [dea135d30e6f] <==
	W0513 22:25:11.121393       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0513 22:25:11.122291       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0513 22:25:11.161477       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0513 22:25:11.162481       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0513 22:25:11.172636       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0513 22:25:11.173007       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0513 22:25:11.199191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0513 22:25:11.199466       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0513 22:25:11.221852       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0513 22:25:11.222368       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0513 22:25:11.407475       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0513 22:25:11.407719       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0513 22:25:11.432113       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0513 22:25:11.432163       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0513 22:25:11.522115       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0513 22:25:11.522352       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0513 22:25:11.647872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0513 22:25:11.648166       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0513 22:25:11.702988       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0513 22:25:11.703031       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0513 22:25:11.710174       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0513 22:25:11.710457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0513 22:25:11.750464       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0513 22:25:11.750842       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0513 22:25:14.129404       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 13 22:29:15 addons-596400 kubelet[2110]: I0513 22:29:15.538510    2110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ff2d0f6-bfef-4313-b53d-b85bb7d1725a" path="/var/lib/kubelet/pods/7ff2d0f6-bfef-4313-b53d-b85bb7d1725a/volumes"
	May 13 22:29:17 addons-596400 kubelet[2110]: I0513 22:29:17.517160    2110 scope.go:117] "RemoveContainer" containerID="aacd70603e00dc4b0e56e1c004de8ed4416243350a3cb33a4ab5a07539460110"
	May 13 22:29:18 addons-596400 kubelet[2110]: I0513 22:29:18.544630    2110 topology_manager.go:215] "Topology Admit Handler" podUID="e027aa01-7ec7-464d-99e4-1a7324b0a40c" podNamespace="default" podName="task-pv-pod-restore"
	May 13 22:29:18 addons-596400 kubelet[2110]: E0513 22:29:18.544750    2110 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f9ec5856-bcdf-46bc-ba1e-99b369c17e30" containerName="registry"
	May 13 22:29:18 addons-596400 kubelet[2110]: E0513 22:29:18.544764    2110 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7ff2d0f6-bfef-4313-b53d-b85bb7d1725a" containerName="task-pv-container"
	May 13 22:29:18 addons-596400 kubelet[2110]: E0513 22:29:18.544774    2110 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d53e7630-43a1-40c6-98ce-c03f26363d5d" containerName="registry-proxy"
	May 13 22:29:18 addons-596400 kubelet[2110]: E0513 22:29:18.544784    2110 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d2716c29-aa11-4ee8-b676-4e15aef3291c" containerName="helm-test"
	May 13 22:29:18 addons-596400 kubelet[2110]: I0513 22:29:18.544825    2110 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff2d0f6-bfef-4313-b53d-b85bb7d1725a" containerName="task-pv-container"
	May 13 22:29:18 addons-596400 kubelet[2110]: I0513 22:29:18.544837    2110 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9ec5856-bcdf-46bc-ba1e-99b369c17e30" containerName="registry"
	May 13 22:29:18 addons-596400 kubelet[2110]: I0513 22:29:18.544847    2110 memory_manager.go:354] "RemoveStaleState removing state" podUID="d53e7630-43a1-40c6-98ce-c03f26363d5d" containerName="registry-proxy"
	May 13 22:29:18 addons-596400 kubelet[2110]: I0513 22:29:18.544858    2110 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2716c29-aa11-4ee8-b676-4e15aef3291c" containerName="helm-test"
	May 13 22:29:18 addons-596400 kubelet[2110]: I0513 22:29:18.733491    2110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e027aa01-7ec7-464d-99e4-1a7324b0a40c-gcp-creds\") pod \"task-pv-pod-restore\" (UID: \"e027aa01-7ec7-464d-99e4-1a7324b0a40c\") " pod="default/task-pv-pod-restore"
	May 13 22:29:18 addons-596400 kubelet[2110]: I0513 22:29:18.733735    2110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1659cf66-a37c-4f99-bf82-8e80258a1462\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^3c26c9f3-1178-11ef-bf97-9a3f1c9dfe70\") pod \"task-pv-pod-restore\" (UID: \"e027aa01-7ec7-464d-99e4-1a7324b0a40c\") " pod="default/task-pv-pod-restore"
	May 13 22:29:18 addons-596400 kubelet[2110]: I0513 22:29:18.733784    2110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksggl\" (UniqueName: \"kubernetes.io/projected/e027aa01-7ec7-464d-99e4-1a7324b0a40c-kube-api-access-ksggl\") pod \"task-pv-pod-restore\" (UID: \"e027aa01-7ec7-464d-99e4-1a7324b0a40c\") " pod="default/task-pv-pod-restore"
	May 13 22:29:18 addons-596400 kubelet[2110]: I0513 22:29:18.842895    2110 operation_generator.go:664] "MountVolume.MountDevice succeeded for volume \"pvc-1659cf66-a37c-4f99-bf82-8e80258a1462\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^3c26c9f3-1178-11ef-bf97-9a3f1c9dfe70\") pod \"task-pv-pod-restore\" (UID: \"e027aa01-7ec7-464d-99e4-1a7324b0a40c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/ffff75b6ab7431d91444642018ac70b9adc7805a0c55cb5e9d425a3875e4b2dd/globalmount\"" pod="default/task-pv-pod-restore"
	May 13 22:29:19 addons-596400 kubelet[2110]: I0513 22:29:19.613697    2110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46b13c60b3fbb3253a07350725c6dd4c351d90e2497eebcffa427b4556526529"
	May 13 22:29:19 addons-596400 kubelet[2110]: I0513 22:29:19.647178    2110 scope.go:117] "RemoveContainer" containerID="aacd70603e00dc4b0e56e1c004de8ed4416243350a3cb33a4ab5a07539460110"
	May 13 22:29:19 addons-596400 kubelet[2110]: I0513 22:29:19.647776    2110 scope.go:117] "RemoveContainer" containerID="3b4925974820c995c6fc641acd3bf22282a6bb6331a18d6ec84f04316126498d"
	May 13 22:29:19 addons-596400 kubelet[2110]: E0513 22:29:19.648326    2110 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-6jp6z_gadget(303f48b9-4487-4bcc-bd05-5595a7d68af2)\"" pod="gadget/gadget-6jp6z" podUID="303f48b9-4487-4bcc-bd05-5595a7d68af2"
	May 13 22:29:20 addons-596400 kubelet[2110]: E0513 22:29:20.589723    2110 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="3b4925974820c995c6fc641acd3bf22282a6bb6331a18d6ec84f04316126498d" cmd=["/bin/gadgettracermanager","-liveness"]
	May 13 22:29:20 addons-596400 kubelet[2110]: I0513 22:29:20.715089    2110 scope.go:117] "RemoveContainer" containerID="3b4925974820c995c6fc641acd3bf22282a6bb6331a18d6ec84f04316126498d"
	May 13 22:29:20 addons-596400 kubelet[2110]: E0513 22:29:20.717139    2110 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-6jp6z_gadget(303f48b9-4487-4bcc-bd05-5595a7d68af2)\"" pod="gadget/gadget-6jp6z" podUID="303f48b9-4487-4bcc-bd05-5595a7d68af2"
	May 13 22:29:20 addons-596400 kubelet[2110]: I0513 22:29:20.718456    2110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=2.209766358 podStartE2EDuration="2.71841496s" podCreationTimestamp="2024-05-13 22:29:18 +0000 UTC" firstStartedPulling="2024-05-13 22:29:19.752326806 +0000 UTC m=+246.488683822" lastFinishedPulling="2024-05-13 22:29:20.260975308 +0000 UTC m=+246.997332424" observedRunningTime="2024-05-13 22:29:20.716096865 +0000 UTC m=+247.452453881" watchObservedRunningTime="2024-05-13 22:29:20.71841496 +0000 UTC m=+247.454772076"
	May 13 22:29:21 addons-596400 kubelet[2110]: I0513 22:29:21.827009    2110 scope.go:117] "RemoveContainer" containerID="3b4925974820c995c6fc641acd3bf22282a6bb6331a18d6ec84f04316126498d"
	May 13 22:29:21 addons-596400 kubelet[2110]: E0513 22:29:21.827614    2110 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-6jp6z_gadget(303f48b9-4487-4bcc-bd05-5595a7d68af2)\"" pod="gadget/gadget-6jp6z" podUID="303f48b9-4487-4bcc-bd05-5595a7d68af2"
	
	
	==> storage-provisioner [82c23b831bd6] <==
	I0513 22:25:58.838960       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0513 22:25:58.895519       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0513 22:25:58.895573       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0513 22:25:58.952634       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0513 22:25:58.952790       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-596400_a53e38b9-c76c-4df1-ab94-f48aef5b3077!
	I0513 22:25:58.953823       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c3e43d42-11cb-4777-8066-29fa8344c545", APIVersion:"v1", ResourceVersion:"834", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-596400_a53e38b9-c76c-4df1-ab94-f48aef5b3077 became leader
	I0513 22:25:59.053666       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-596400_a53e38b9-c76c-4df1-ab94-f48aef5b3077!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 22:29:16.606880    3932 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-596400 -n addons-596400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-596400 -n addons-596400: (11.3128738s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-596400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-8vmt8 ingress-nginx-admission-patch-jr2sp
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-596400 describe pod ingress-nginx-admission-create-8vmt8 ingress-nginx-admission-patch-jr2sp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-596400 describe pod ingress-nginx-admission-create-8vmt8 ingress-nginx-admission-patch-jr2sp: exit status 1 (152.0495ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-8vmt8" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-jr2sp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-596400 describe pod ingress-nginx-admission-create-8vmt8 ingress-nginx-admission-patch-jr2sp: exit status 1
--- FAIL: TestAddons/parallel/Registry (64.44s)

                                                
                                    
TestErrorSpam/setup (180.37s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-457100 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 --driver=hyperv
E0513 22:33:32.731245    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
E0513 22:33:32.746226    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
E0513 22:33:32.761937    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
E0513 22:33:32.793388    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
E0513 22:33:32.840620    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
E0513 22:33:32.934938    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
E0513 22:33:33.109144    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
E0513 22:33:33.441187    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
E0513 22:33:34.088187    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
E0513 22:33:35.377183    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
E0513 22:33:37.938899    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
E0513 22:33:43.063688    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
E0513 22:33:53.304372    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
E0513 22:34:13.794190    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
E0513 22:34:54.764752    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-457100 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 --driver=hyperv: (3m0.3717708s)
error_spam_test.go:96: unexpected stderr: "W0513 22:32:54.639152   10684 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-457100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
- KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
- MINIKUBE_LOCATION=18872
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-457100" primary control-plane node in "nospam-457100" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-457100" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0513 22:32:54.639152   10684 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (180.37s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (29.49s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-129600 -n functional-129600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-129600 -n functional-129600: (10.2579548s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 logs -n 25: (7.6281629s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-457100 --log_dir                                     | nospam-457100     | minikube5\jenkins | v1.33.1 | 13 May 24 22:36 UTC | 13 May 24 22:37 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-457100 --log_dir                                     | nospam-457100     | minikube5\jenkins | v1.33.1 | 13 May 24 22:37 UTC | 13 May 24 22:37 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-457100 --log_dir                                     | nospam-457100     | minikube5\jenkins | v1.33.1 | 13 May 24 22:37 UTC | 13 May 24 22:37 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-457100 --log_dir                                     | nospam-457100     | minikube5\jenkins | v1.33.1 | 13 May 24 22:37 UTC | 13 May 24 22:37 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-457100 --log_dir                                     | nospam-457100     | minikube5\jenkins | v1.33.1 | 13 May 24 22:37 UTC | 13 May 24 22:38 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-457100 --log_dir                                     | nospam-457100     | minikube5\jenkins | v1.33.1 | 13 May 24 22:38 UTC | 13 May 24 22:38 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-457100 --log_dir                                     | nospam-457100     | minikube5\jenkins | v1.33.1 | 13 May 24 22:38 UTC | 13 May 24 22:38 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-457100                                            | nospam-457100     | minikube5\jenkins | v1.33.1 | 13 May 24 22:38 UTC | 13 May 24 22:38 UTC |
	| start   | -p functional-129600                                        | functional-129600 | minikube5\jenkins | v1.33.1 | 13 May 24 22:38 UTC | 13 May 24 22:41 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-129600                                        | functional-129600 | minikube5\jenkins | v1.33.1 | 13 May 24 22:41 UTC | 13 May 24 22:43 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-129600 cache add                                 | functional-129600 | minikube5\jenkins | v1.33.1 | 13 May 24 22:43 UTC | 13 May 24 22:43 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-129600 cache add                                 | functional-129600 | minikube5\jenkins | v1.33.1 | 13 May 24 22:43 UTC | 13 May 24 22:43 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-129600 cache add                                 | functional-129600 | minikube5\jenkins | v1.33.1 | 13 May 24 22:43 UTC | 13 May 24 22:43 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-129600 cache add                                 | functional-129600 | minikube5\jenkins | v1.33.1 | 13 May 24 22:44 UTC | 13 May 24 22:44 UTC |
	|         | minikube-local-cache-test:functional-129600                 |                   |                   |         |                     |                     |
	| cache   | functional-129600 cache delete                              | functional-129600 | minikube5\jenkins | v1.33.1 | 13 May 24 22:44 UTC | 13 May 24 22:44 UTC |
	|         | minikube-local-cache-test:functional-129600                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube5\jenkins | v1.33.1 | 13 May 24 22:44 UTC | 13 May 24 22:44 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube5\jenkins | v1.33.1 | 13 May 24 22:44 UTC | 13 May 24 22:44 UTC |
	| ssh     | functional-129600 ssh sudo                                  | functional-129600 | minikube5\jenkins | v1.33.1 | 13 May 24 22:44 UTC | 13 May 24 22:44 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-129600                                           | functional-129600 | minikube5\jenkins | v1.33.1 | 13 May 24 22:44 UTC | 13 May 24 22:44 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-129600 ssh                                       | functional-129600 | minikube5\jenkins | v1.33.1 | 13 May 24 22:44 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-129600 cache reload                              | functional-129600 | minikube5\jenkins | v1.33.1 | 13 May 24 22:44 UTC | 13 May 24 22:44 UTC |
	| ssh     | functional-129600 ssh                                       | functional-129600 | minikube5\jenkins | v1.33.1 | 13 May 24 22:44 UTC | 13 May 24 22:44 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube5\jenkins | v1.33.1 | 13 May 24 22:44 UTC | 13 May 24 22:44 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube5\jenkins | v1.33.1 | 13 May 24 22:44 UTC | 13 May 24 22:44 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-129600 kubectl --                                | functional-129600 | minikube5\jenkins | v1.33.1 | 13 May 24 22:44 UTC | 13 May 24 22:44 UTC |
	|         | --context functional-129600                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/13 22:41:42
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0513 22:41:42.560191   10004 out.go:291] Setting OutFile to fd 980 ...
	I0513 22:41:42.560776   10004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:41:42.560854   10004 out.go:304] Setting ErrFile to fd 960...
	I0513 22:41:42.560854   10004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:41:42.574817   10004 out.go:298] Setting JSON to false
	I0513 22:41:42.579263   10004 start.go:129] hostinfo: {"hostname":"minikube5","uptime":1666,"bootTime":1715638436,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4355 Build 19045.4355","kernelVersion":"10.0.19045.4355 Build 19045.4355","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0513 22:41:42.579263   10004 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 22:41:42.587143   10004 out.go:177] * [functional-129600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	I0513 22:41:42.592366   10004 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 22:41:42.592068   10004 notify.go:220] Checking for updates...
	I0513 22:41:42.594609   10004 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 22:41:42.597068   10004 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0513 22:41:42.599897   10004 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 22:41:42.602654   10004 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 22:41:42.605942   10004 config.go:182] Loaded profile config "functional-129600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 22:41:42.606006   10004 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 22:41:47.162485   10004 out.go:177] * Using the hyperv driver based on existing profile
	I0513 22:41:47.164653   10004 start.go:297] selected driver: hyperv
	I0513 22:41:47.164653   10004 start.go:901] validating driver "hyperv" against &{Name:functional-129600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-129600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.102.96 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 22:41:47.165379   10004 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 22:41:47.206259   10004 cni.go:84] Creating CNI manager for ""
	I0513 22:41:47.206259   10004 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 22:41:47.206259   10004 start.go:340] cluster config:
	{Name:functional-129600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-129600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.102.96 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 22:41:47.206259   10004 iso.go:125] acquiring lock: {Name:mkcecbdb7e30e5a0901160a859f9d5b65d250c44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 22:41:47.209262   10004 out.go:177] * Starting "functional-129600" primary control-plane node in "functional-129600" cluster
	I0513 22:41:47.212038   10004 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 22:41:47.212038   10004 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0513 22:41:47.213423   10004 cache.go:56] Caching tarball of preloaded images
	I0513 22:41:47.213423   10004 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0513 22:41:47.213423   10004 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 22:41:47.214005   10004 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\config.json ...
	I0513 22:41:47.214271   10004 start.go:360] acquireMachinesLock for functional-129600: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 22:41:47.215863   10004 start.go:364] duration metric: took 1.5925ms to acquireMachinesLock for "functional-129600"
	I0513 22:41:47.215863   10004 start.go:96] Skipping create...Using existing machine configuration
	I0513 22:41:47.215863   10004 fix.go:54] fixHost starting: 
	I0513 22:41:47.216481   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
	I0513 22:41:49.575025   10004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:41:49.575025   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:41:49.575111   10004 fix.go:112] recreateIfNeeded on functional-129600: state=Running err=<nil>
	W0513 22:41:49.575126   10004 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 22:41:49.579211   10004 out.go:177] * Updating the running hyperv "functional-129600" VM ...
	I0513 22:41:49.581291   10004 machine.go:94] provisionDockerMachine start ...
	I0513 22:41:49.581291   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
	I0513 22:41:51.442163   10004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:41:51.452222   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:41:51.452222   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-129600 ).networkadapters[0]).ipaddresses[0]
	I0513 22:41:53.646536   10004 main.go:141] libmachine: [stdout =====>] : 172.23.102.96
	
	I0513 22:41:53.646536   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:41:53.650312   10004 main.go:141] libmachine: Using SSH client type: native
	I0513 22:41:53.650797   10004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.96 22 <nil> <nil>}
	I0513 22:41:53.650797   10004 main.go:141] libmachine: About to run SSH command:
	hostname
	I0513 22:41:53.771990   10004 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-129600
	
	I0513 22:41:53.772539   10004 buildroot.go:166] provisioning hostname "functional-129600"
	I0513 22:41:53.772539   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
	I0513 22:41:55.584438   10004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:41:55.594191   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:41:55.594191   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-129600 ).networkadapters[0]).ipaddresses[0]
	I0513 22:41:57.808904   10004 main.go:141] libmachine: [stdout =====>] : 172.23.102.96
	
	I0513 22:41:57.808904   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:41:57.814420   10004 main.go:141] libmachine: Using SSH client type: native
	I0513 22:41:57.814940   10004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.96 22 <nil> <nil>}
	I0513 22:41:57.815119   10004 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-129600 && echo "functional-129600" | sudo tee /etc/hostname
	I0513 22:41:57.963951   10004 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-129600
	
	I0513 22:41:57.963951   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
	I0513 22:41:59.796521   10004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:41:59.796521   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:41:59.812220   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-129600 ).networkadapters[0]).ipaddresses[0]
	I0513 22:42:02.026891   10004 main.go:141] libmachine: [stdout =====>] : 172.23.102.96
	
	I0513 22:42:02.036583   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:42:02.041009   10004 main.go:141] libmachine: Using SSH client type: native
	I0513 22:42:02.041401   10004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.96 22 <nil> <nil>}
	I0513 22:42:02.041401   10004 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-129600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-129600/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-129600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0513 22:42:02.164224   10004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
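The /etc/hosts command in the log above has two branches: rewrite an existing `127.0.1.1` entry, or append a new one. A minimal standalone sketch of that logic, run against a temp file so it needs no root (the real command edits `/etc/hosts` over SSH with sudo; the file contents here are assumed for illustration):

```shell
# Sketch of minikube's /etc/hosts hostname update, against a temp file.
HOSTS=$(mktemp)
NAME=functional-129600
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

# If no line already maps the hostname: rewrite an existing 127.0.1.1 entry
# when present, otherwise append a fresh one -- the same two branches as above.
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
RESULT=$(grep '^127\.0\.1\.1' "$HOSTS")
rm -f "$HOSTS"
echo "$RESULT"
```

Since `old-name` occupies the `127.0.1.1` slot, this exercises the sed branch and leaves `127.0.1.1 functional-129600` in the file.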
	I0513 22:42:02.164224   10004 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0513 22:42:02.164421   10004 buildroot.go:174] setting up certificates
	I0513 22:42:02.164421   10004 provision.go:84] configureAuth start
	I0513 22:42:02.164421   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
	I0513 22:42:03.982204   10004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:42:03.982204   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:42:03.982279   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-129600 ).networkadapters[0]).ipaddresses[0]
	I0513 22:42:06.159754   10004 main.go:141] libmachine: [stdout =====>] : 172.23.102.96
	
	I0513 22:42:06.159754   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:42:06.169508   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
	I0513 22:42:07.987429   10004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:42:07.997244   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:42:07.997244   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-129600 ).networkadapters[0]).ipaddresses[0]
	I0513 22:42:10.141087   10004 main.go:141] libmachine: [stdout =====>] : 172.23.102.96
	
	I0513 22:42:10.141087   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:42:10.141087   10004 provision.go:143] copyHostCerts
	I0513 22:42:10.150313   10004 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0513 22:42:10.150621   10004 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0513 22:42:10.150621   10004 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0513 22:42:10.150824   10004 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0513 22:42:10.152099   10004 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0513 22:42:10.152292   10004 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0513 22:42:10.152292   10004 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0513 22:42:10.152533   10004 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0513 22:42:10.153051   10004 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0513 22:42:10.153051   10004 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0513 22:42:10.153051   10004 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0513 22:42:10.153051   10004 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0513 22:42:10.154019   10004 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-129600 san=[127.0.0.1 172.23.102.96 functional-129600 localhost minikube]
	I0513 22:42:10.474645   10004 provision.go:177] copyRemoteCerts
	I0513 22:42:10.492957   10004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0513 22:42:10.492957   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
	I0513 22:42:12.308412   10004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:42:12.308412   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:42:12.308486   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-129600 ).networkadapters[0]).ipaddresses[0]
	I0513 22:42:14.469351   10004 main.go:141] libmachine: [stdout =====>] : 172.23.102.96
	
	I0513 22:42:14.469439   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:42:14.469552   10004 sshutil.go:53] new ssh client: &{IP:172.23.102.96 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-129600\id_rsa Username:docker}
	I0513 22:42:14.565858   10004 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.07271s)
	I0513 22:42:14.565900   10004 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0513 22:42:14.565900   10004 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0513 22:42:14.605181   10004 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0513 22:42:14.605602   10004 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0513 22:42:14.637192   10004 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0513 22:42:14.644204   10004 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0513 22:42:14.681251   10004 provision.go:87] duration metric: took 12.5164716s to configureAuth
	I0513 22:42:14.681251   10004 buildroot.go:189] setting minikube options for container-runtime
	I0513 22:42:14.681251   10004 config.go:182] Loaded profile config "functional-129600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 22:42:14.681780   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
	I0513 22:42:16.460008   10004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:42:16.469712   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:42:16.469875   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-129600 ).networkadapters[0]).ipaddresses[0]
	I0513 22:42:18.649431   10004 main.go:141] libmachine: [stdout =====>] : 172.23.102.96
	
	I0513 22:42:18.649431   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:42:18.653106   10004 main.go:141] libmachine: Using SSH client type: native
	I0513 22:42:18.653106   10004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.96 22 <nil> <nil>}
	I0513 22:42:18.653106   10004 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0513 22:42:18.778318   10004 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0513 22:42:18.778318   10004 buildroot.go:70] root file system type: tmpfs
	I0513 22:42:18.778494   10004 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0513 22:42:18.778494   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
	I0513 22:42:20.567389   10004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:42:20.567389   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:42:20.567500   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-129600 ).networkadapters[0]).ipaddresses[0]
	I0513 22:42:22.768127   10004 main.go:141] libmachine: [stdout =====>] : 172.23.102.96
	
	I0513 22:42:22.768127   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:42:22.773682   10004 main.go:141] libmachine: Using SSH client type: native
	I0513 22:42:22.774328   10004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.96 22 <nil> <nil>}
	I0513 22:42:22.774328   10004 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0513 22:42:22.924540   10004 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0513 22:42:22.924659   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
	I0513 22:42:24.742937   10004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:42:24.742937   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:42:24.742937   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-129600 ).networkadapters[0]).ipaddresses[0]
	I0513 22:42:26.966301   10004 main.go:141] libmachine: [stdout =====>] : 172.23.102.96
	
	I0513 22:42:26.966301   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:42:26.970133   10004 main.go:141] libmachine: Using SSH client type: native
	I0513 22:42:26.970576   10004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.96 22 <nil> <nil>}
	I0513 22:42:26.970576   10004 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0513 22:42:27.102024   10004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
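The `diff -u ... || { mv ...; systemctl ...; }` command above is an install-if-changed pattern: `diff` exits non-zero when the staged unit differs from the live one, which triggers the replace-and-restart branch. A hedged sketch under the assumption of temp files in place of the real `/lib/systemd/system` paths (so no root or systemctl is required):

```shell
# Sketch of the "install unit only if it changed" pattern from the log above.
live=$(mktemp); staged=$(mktemp)
echo "ExecStart=/usr/bin/dockerd --old-flag" > "$live"
echo "ExecStart=/usr/bin/dockerd --new-flag" > "$staged"

if diff -u "$live" "$staged" > /dev/null; then
  ACTION=unchanged   # identical: leave the running service alone
else
  mv "$staged" "$live"   # differs: install the staged unit over the live one
  ACTION=replaced        # the real flow then runs daemon-reload && restart docker
fi
echo "$ACTION"
```

The pattern avoids a needless `systemctl restart docker` (and the downtime it causes) on reruns where the generated unit is byte-identical to what is already installed.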
	I0513 22:42:27.102024   10004 machine.go:97] duration metric: took 37.5196539s to provisionDockerMachine
	I0513 22:42:27.102024   10004 start.go:293] postStartSetup for "functional-129600" (driver="hyperv")
	I0513 22:42:27.102176   10004 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0513 22:42:27.110305   10004 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0513 22:42:27.110305   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
	I0513 22:42:28.895902   10004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:42:28.895902   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:42:28.904947   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-129600 ).networkadapters[0]).ipaddresses[0]
	I0513 22:42:31.086327   10004 main.go:141] libmachine: [stdout =====>] : 172.23.102.96
	
	I0513 22:42:31.086327   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:42:31.086855   10004 sshutil.go:53] new ssh client: &{IP:172.23.102.96 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-129600\id_rsa Username:docker}
	I0513 22:42:31.173143   10004 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.0627202s)
	I0513 22:42:31.198987   10004 ssh_runner.go:195] Run: cat /etc/os-release
	I0513 22:42:31.205559   10004 command_runner.go:130] > NAME=Buildroot
	I0513 22:42:31.205559   10004 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0513 22:42:31.205667   10004 command_runner.go:130] > ID=buildroot
	I0513 22:42:31.205667   10004 command_runner.go:130] > VERSION_ID=2023.02.9
	I0513 22:42:31.205667   10004 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0513 22:42:31.205714   10004 info.go:137] Remote host: Buildroot 2023.02.9
	I0513 22:42:31.205714   10004 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0513 22:42:31.205714   10004 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0513 22:42:31.206255   10004 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> 59842.pem in /etc/ssl/certs
	I0513 22:42:31.206255   10004 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /etc/ssl/certs/59842.pem
	I0513 22:42:31.207147   10004 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\5984\hosts -> hosts in /etc/test/nested/copy/5984
	I0513 22:42:31.207147   10004 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\5984\hosts -> /etc/test/nested/copy/5984/hosts
	I0513 22:42:31.214242   10004 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/5984
	I0513 22:42:31.231568   10004 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /etc/ssl/certs/59842.pem (1708 bytes)
	I0513 22:42:31.274057   10004 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\5984\hosts --> /etc/test/nested/copy/5984/hosts (40 bytes)
	I0513 22:42:31.314156   10004 start.go:296] duration metric: took 4.2120107s for postStartSetup
	I0513 22:42:31.314156   10004 fix.go:56] duration metric: took 44.0970244s for fixHost
	I0513 22:42:31.314156   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
	I0513 22:42:33.151364   10004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:42:33.151364   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:42:33.161961   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-129600 ).networkadapters[0]).ipaddresses[0]
	I0513 22:42:35.285534   10004 main.go:141] libmachine: [stdout =====>] : 172.23.102.96
	
	I0513 22:42:35.294993   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:42:35.299173   10004 main.go:141] libmachine: Using SSH client type: native
	I0513 22:42:35.299233   10004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.96 22 <nil> <nil>}
	I0513 22:42:35.299233   10004 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0513 22:42:35.415528   10004 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715640155.543355915
	
	I0513 22:42:35.415635   10004 fix.go:216] guest clock: 1715640155.543355915
	I0513 22:42:35.415635   10004 fix.go:229] Guest: 2024-05-13 22:42:35.543355915 +0000 UTC Remote: 2024-05-13 22:42:31.3141569 +0000 UTC m=+48.868132801 (delta=4.229199015s)
	I0513 22:42:35.415732   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
	I0513 22:42:37.246654   10004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:42:37.256445   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:42:37.256528   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-129600 ).networkadapters[0]).ipaddresses[0]
	I0513 22:42:39.470758   10004 main.go:141] libmachine: [stdout =====>] : 172.23.102.96
	
	I0513 22:42:39.481087   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:42:39.484728   10004 main.go:141] libmachine: Using SSH client type: native
	I0513 22:42:39.485251   10004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.96 22 <nil> <nil>}
	I0513 22:42:39.485251   10004 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715640155
	I0513 22:42:39.612252   10004 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 13 22:42:35 UTC 2024
	
	I0513 22:42:39.612252   10004 fix.go:236] clock set: Mon May 13 22:42:35 UTC 2024
	 (err=<nil>)
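The clock-fix sequence above reads the guest's epoch time over SSH, computes the drift against the host (`delta=4.229199015s` here), and resyncs with `sudo date -s @<epoch>`. A small sketch of that check, with the timestamps hard-coded from this log and the drift threshold an assumption (minikube's actual cutoff may differ):

```shell
# Sketch of the guest-clock drift check from the log above.
guest=1715640155   # guest 'date +%s.%N' output, fractional part dropped
host=1715640151    # host wall clock at roughly the same moment (approximate)
delta=$((guest - host))

# Resync only when the absolute drift exceeds an assumed 2s threshold.
if [ "$delta" -ge 2 ] || [ "$delta" -le -2 ]; then
  SYNC_CMD="sudo date -s @$guest"   # command the log shows being run on the VM
else
  SYNC_CMD=""
fi
echo "$SYNC_CMD"
```

With the ~4s drift recorded in this run, the check fires and produces the same `sudo date -s @1715640155` command seen in the log.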
	I0513 22:42:39.612252   10004 start.go:83] releasing machines lock for "functional-129600", held for 52.3948798s
	I0513 22:42:39.612252   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
	I0513 22:42:41.462264   10004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:42:41.462264   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:42:41.462264   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-129600 ).networkadapters[0]).ipaddresses[0]
	I0513 22:42:43.629310   10004 main.go:141] libmachine: [stdout =====>] : 172.23.102.96
	
	I0513 22:42:43.629310   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:42:43.640921   10004 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0513 22:42:43.641049   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
	I0513 22:42:43.647011   10004 ssh_runner.go:195] Run: cat /version.json
	I0513 22:42:43.647011   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
	I0513 22:42:45.559293   10004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:42:45.559293   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:42:45.566328   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-129600 ).networkadapters[0]).ipaddresses[0]
	I0513 22:42:45.570183   10004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:42:45.570261   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:42:45.570370   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-129600 ).networkadapters[0]).ipaddresses[0]
	I0513 22:42:47.857072   10004 main.go:141] libmachine: [stdout =====>] : 172.23.102.96
	
	I0513 22:42:47.857072   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:42:47.867971   10004 sshutil.go:53] new ssh client: &{IP:172.23.102.96 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-129600\id_rsa Username:docker}
	I0513 22:42:47.899687   10004 main.go:141] libmachine: [stdout =====>] : 172.23.102.96
	
	I0513 22:42:47.902727   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:42:47.903071   10004 sshutil.go:53] new ssh client: &{IP:172.23.102.96 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-129600\id_rsa Username:docker}
	I0513 22:42:48.018515   10004 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0513 22:42:48.018629   10004 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.3775494s)
	I0513 22:42:48.018699   10004 command_runner.go:130] > {"iso_version": "v1.33.1", "kicbase_version": "v0.0.43-1714992375-18804", "minikube_version": "v1.33.1", "commit": "d6e0d89dd5607476c1efbac5f05c928d4cd7ef53"}
	I0513 22:42:48.018699   10004 ssh_runner.go:235] Completed: cat /version.json: (4.3715614s)
	I0513 22:42:48.029263   10004 ssh_runner.go:195] Run: systemctl --version
	I0513 22:42:48.045084   10004 command_runner.go:130] > systemd 252 (252)
	I0513 22:42:48.045084   10004 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0513 22:42:48.053117   10004 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0513 22:42:48.066279   10004 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0513 22:42:48.066370   10004 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0513 22:42:48.074123   10004 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0513 22:42:48.087953   10004 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0513 22:42:48.088003   10004 start.go:494] detecting cgroup driver to use...
	I0513 22:42:48.088030   10004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 22:42:48.122049   10004 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0513 22:42:48.130792   10004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0513 22:42:48.156826   10004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0513 22:42:48.176058   10004 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0513 22:42:48.185043   10004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0513 22:42:48.214636   10004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 22:42:48.238099   10004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0513 22:42:48.263681   10004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 22:42:48.288310   10004 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0513 22:42:48.316635   10004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0513 22:42:48.342341   10004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0513 22:42:48.371163   10004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0513 22:42:48.398351   10004 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0513 22:42:48.415709   10004 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0513 22:42:48.427090   10004 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0513 22:42:48.450808   10004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:42:48.657341   10004 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0513 22:42:48.685449   10004 start.go:494] detecting cgroup driver to use...
	I0513 22:42:48.694670   10004 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0513 22:42:48.721676   10004 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0513 22:42:48.721676   10004 command_runner.go:130] > [Unit]
	I0513 22:42:48.721676   10004 command_runner.go:130] > Description=Docker Application Container Engine
	I0513 22:42:48.721676   10004 command_runner.go:130] > Documentation=https://docs.docker.com
	I0513 22:42:48.721676   10004 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0513 22:42:48.721676   10004 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0513 22:42:48.721676   10004 command_runner.go:130] > StartLimitBurst=3
	I0513 22:42:48.721676   10004 command_runner.go:130] > StartLimitIntervalSec=60
	I0513 22:42:48.721676   10004 command_runner.go:130] > [Service]
	I0513 22:42:48.721676   10004 command_runner.go:130] > Type=notify
	I0513 22:42:48.721676   10004 command_runner.go:130] > Restart=on-failure
	I0513 22:42:48.721676   10004 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0513 22:42:48.721676   10004 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0513 22:42:48.721676   10004 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0513 22:42:48.721676   10004 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0513 22:42:48.721676   10004 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0513 22:42:48.721676   10004 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0513 22:42:48.721676   10004 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0513 22:42:48.721676   10004 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0513 22:42:48.721676   10004 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0513 22:42:48.721676   10004 command_runner.go:130] > ExecStart=
	I0513 22:42:48.721676   10004 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0513 22:42:48.721676   10004 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0513 22:42:48.721676   10004 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0513 22:42:48.721676   10004 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0513 22:42:48.721676   10004 command_runner.go:130] > LimitNOFILE=infinity
	I0513 22:42:48.721676   10004 command_runner.go:130] > LimitNPROC=infinity
	I0513 22:42:48.721676   10004 command_runner.go:130] > LimitCORE=infinity
	I0513 22:42:48.722206   10004 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0513 22:42:48.722206   10004 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0513 22:42:48.722249   10004 command_runner.go:130] > TasksMax=infinity
	I0513 22:42:48.722249   10004 command_runner.go:130] > TimeoutStartSec=0
	I0513 22:42:48.722303   10004 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0513 22:42:48.722303   10004 command_runner.go:130] > Delegate=yes
	I0513 22:42:48.722339   10004 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0513 22:42:48.722339   10004 command_runner.go:130] > KillMode=process
	I0513 22:42:48.722372   10004 command_runner.go:130] > [Install]
	I0513 22:42:48.722372   10004 command_runner.go:130] > WantedBy=multi-user.target
	I0513 22:42:48.732248   10004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 22:42:48.765834   10004 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0513 22:42:48.802853   10004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 22:42:48.832627   10004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 22:42:48.855391   10004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 22:42:48.873434   10004 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0513 22:42:48.895479   10004 ssh_runner.go:195] Run: which cri-dockerd
	I0513 22:42:48.898805   10004 command_runner.go:130] > /usr/bin/cri-dockerd
	I0513 22:42:48.910602   10004 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0513 22:42:48.925404   10004 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0513 22:42:48.958390   10004 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0513 22:42:49.191845   10004 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0513 22:42:49.381136   10004 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0513 22:42:49.389139   10004 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0513 22:42:49.427524   10004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:42:49.636237   10004 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0513 22:43:02.409040   10004 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.7724339s)
	I0513 22:43:02.419761   10004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0513 22:43:02.459222   10004 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0513 22:43:02.498154   10004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 22:43:02.528407   10004 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0513 22:43:02.698637   10004 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0513 22:43:02.866469   10004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:43:03.051579   10004 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0513 22:43:03.085725   10004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 22:43:03.116017   10004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:43:03.290847   10004 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0513 22:43:03.402341   10004 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0513 22:43:03.412864   10004 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0513 22:43:03.416671   10004 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0513 22:43:03.416671   10004 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0513 22:43:03.416671   10004 command_runner.go:130] > Device: 0,22	Inode: 1436        Links: 1
	I0513 22:43:03.416671   10004 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0513 22:43:03.416671   10004 command_runner.go:130] > Access: 2024-05-13 22:43:03.442127298 +0000
	I0513 22:43:03.416671   10004 command_runner.go:130] > Modify: 2024-05-13 22:43:03.442127298 +0000
	I0513 22:43:03.416671   10004 command_runner.go:130] > Change: 2024-05-13 22:43:03.445127519 +0000
	I0513 22:43:03.416671   10004 command_runner.go:130] >  Birth: -
	I0513 22:43:03.424113   10004 start.go:562] Will wait 60s for crictl version
	I0513 22:43:03.434474   10004 ssh_runner.go:195] Run: which crictl
	I0513 22:43:03.440285   10004 command_runner.go:130] > /usr/bin/crictl
	I0513 22:43:03.452216   10004 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0513 22:43:03.511641   10004 command_runner.go:130] > Version:  0.1.0
	I0513 22:43:03.511641   10004 command_runner.go:130] > RuntimeName:  docker
	I0513 22:43:03.511641   10004 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0513 22:43:03.511641   10004 command_runner.go:130] > RuntimeApiVersion:  v1
	I0513 22:43:03.511641   10004 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0513 22:43:03.522105   10004 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 22:43:03.548662   10004 command_runner.go:130] > 26.0.2
	I0513 22:43:03.556415   10004 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 22:43:03.582527   10004 command_runner.go:130] > 26.0.2
	I0513 22:43:03.586162   10004 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0513 22:43:03.586162   10004 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0513 22:43:03.591542   10004 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0513 22:43:03.591542   10004 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0513 22:43:03.591542   10004 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0513 22:43:03.591542   10004 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:95:ed Flags:up|broadcast|multicast|running}
	I0513 22:43:03.593214   10004 ip.go:210] interface addr: fe80::3ceb:68d:afab:af25/64
	I0513 22:43:03.593214   10004 ip.go:210] interface addr: 172.23.96.1/20
	I0513 22:43:03.601929   10004 ssh_runner.go:195] Run: grep 172.23.96.1	host.minikube.internal$ /etc/hosts
	I0513 22:43:03.607459   10004 command_runner.go:130] > 172.23.96.1	host.minikube.internal
	I0513 22:43:03.607459   10004 kubeadm.go:877] updating cluster {Name:functional-129600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-129600 Namespace:defaul
t APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.102.96 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0513 22:43:03.607459   10004 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 22:43:03.613656   10004 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0513 22:43:03.632328   10004 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0513 22:43:03.632328   10004 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0513 22:43:03.632328   10004 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0513 22:43:03.632328   10004 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0513 22:43:03.632415   10004 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0513 22:43:03.632415   10004 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0513 22:43:03.632415   10004 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0513 22:43:03.632415   10004 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 22:43:03.632476   10004 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0513 22:43:03.632476   10004 docker.go:615] Images already preloaded, skipping extraction
	I0513 22:43:03.643898   10004 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0513 22:43:03.662109   10004 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0513 22:43:03.662551   10004 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0513 22:43:03.662593   10004 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0513 22:43:03.662593   10004 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0513 22:43:03.662593   10004 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0513 22:43:03.662632   10004 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0513 22:43:03.662632   10004 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0513 22:43:03.662632   10004 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 22:43:03.663058   10004 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0513 22:43:03.663130   10004 cache_images.go:84] Images are preloaded, skipping loading
	I0513 22:43:03.663130   10004 kubeadm.go:928] updating node { 172.23.102.96 8441 v1.30.0 docker true true} ...
	I0513 22:43:03.663285   10004 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-129600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.102.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:functional-129600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0513 22:43:03.670434   10004 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0513 22:43:03.691388   10004 command_runner.go:130] > cgroupfs
	I0513 22:43:03.696320   10004 cni.go:84] Creating CNI manager for ""
	I0513 22:43:03.696320   10004 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 22:43:03.696320   10004 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0513 22:43:03.696320   10004 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.102.96 APIServerPort:8441 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-129600 NodeName:functional-129600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.102.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.102.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0513 22:43:03.696320   10004 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.102.96
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-129600"
	  kubeletExtraArgs:
	    node-ip: 172.23.102.96
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.102.96"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0513 22:43:03.706122   10004 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0513 22:43:03.722124   10004 command_runner.go:130] > kubeadm
	I0513 22:43:03.722221   10004 command_runner.go:130] > kubectl
	I0513 22:43:03.722264   10004 command_runner.go:130] > kubelet
	I0513 22:43:03.722304   10004 binaries.go:44] Found k8s binaries, skipping transfer
	I0513 22:43:03.730476   10004 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0513 22:43:03.744460   10004 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0513 22:43:03.769251   10004 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0513 22:43:03.794470   10004 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0513 22:43:03.830009   10004 ssh_runner.go:195] Run: grep 172.23.102.96	control-plane.minikube.internal$ /etc/hosts
	I0513 22:43:03.835535   10004 command_runner.go:130] > 172.23.102.96	control-plane.minikube.internal
	I0513 22:43:03.844537   10004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:43:04.009942   10004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 22:43:04.032072   10004 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600 for IP: 172.23.102.96
	I0513 22:43:04.032072   10004 certs.go:194] generating shared ca certs ...
	I0513 22:43:04.032072   10004 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:43:04.032832   10004 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0513 22:43:04.033162   10004 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0513 22:43:04.033561   10004 certs.go:256] generating profile certs ...
	I0513 22:43:04.034809   10004 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.key
	I0513 22:43:04.035266   10004 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\apiserver.key.6baf9bfc
	I0513 22:43:04.035266   10004 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\proxy-client.key
	I0513 22:43:04.035266   10004 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0513 22:43:04.035963   10004 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0513 22:43:04.036199   10004 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0513 22:43:04.036449   10004 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0513 22:43:04.036736   10004 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0513 22:43:04.037005   10004 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0513 22:43:04.037144   10004 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0513 22:43:04.037144   10004 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0513 22:43:04.037750   10004 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem (1338 bytes)
	W0513 22:43:04.037750   10004 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984_empty.pem, impossibly tiny 0 bytes
	I0513 22:43:04.038398   10004 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0513 22:43:04.038485   10004 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0513 22:43:04.038485   10004 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0513 22:43:04.038485   10004 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0513 22:43:04.039104   10004 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem (1708 bytes)
	I0513 22:43:04.039104   10004 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem -> /usr/share/ca-certificates/5984.pem
	I0513 22:43:04.039104   10004 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /usr/share/ca-certificates/59842.pem
	I0513 22:43:04.039104   10004 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0513 22:43:04.040215   10004 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0513 22:43:04.076593   10004 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0513 22:43:04.115479   10004 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0513 22:43:04.152507   10004 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0513 22:43:04.182723   10004 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0513 22:43:04.226052   10004 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0513 22:43:04.263019   10004 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0513 22:43:04.298657   10004 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0513 22:43:04.339132   10004 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem --> /usr/share/ca-certificates/5984.pem (1338 bytes)
	I0513 22:43:04.377289   10004 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /usr/share/ca-certificates/59842.pem (1708 bytes)
	I0513 22:43:04.414983   10004 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0513 22:43:04.447326   10004 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0513 22:43:04.486450   10004 ssh_runner.go:195] Run: openssl version
	I0513 22:43:04.490984   10004 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0513 22:43:04.505263   10004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5984.pem && ln -fs /usr/share/ca-certificates/5984.pem /etc/ssl/certs/5984.pem"
	I0513 22:43:04.531342   10004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5984.pem
	I0513 22:43:04.533254   10004 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0513 22:43:04.533254   10004 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0513 22:43:04.545602   10004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5984.pem
	I0513 22:43:04.553965   10004 command_runner.go:130] > 51391683
	I0513 22:43:04.562586   10004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5984.pem /etc/ssl/certs/51391683.0"
	I0513 22:43:04.586767   10004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59842.pem && ln -fs /usr/share/ca-certificates/59842.pem /etc/ssl/certs/59842.pem"
	I0513 22:43:04.612398   10004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59842.pem
	I0513 22:43:04.621397   10004 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0513 22:43:04.621397   10004 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0513 22:43:04.631016   10004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59842.pem
	I0513 22:43:04.633887   10004 command_runner.go:130] > 3ec20f2e
	I0513 22:43:04.649396   10004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59842.pem /etc/ssl/certs/3ec20f2e.0"
	I0513 22:43:04.673912   10004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0513 22:43:04.697695   10004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0513 22:43:04.701126   10004 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0513 22:43:04.704027   10004 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0513 22:43:04.713520   10004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0513 22:43:04.715761   10004 command_runner.go:130] > b5213941
	I0513 22:43:04.729520   10004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
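The sequence above copies each CA into /usr/share/ca-certificates, computes its OpenSSL subject hash with `openssl x509 -hash`, and symlinks `/etc/ssl/certs/<hash>.0` at it so OpenSSL's hashed-directory lookup can resolve it. A minimal self-contained sketch of that convention, using a throwaway self-signed cert in a temp dir instead of minikube's files (all paths and names here are illustrative, not minikube's):

```shell
# Recreate the hash-and-symlink step from the log in a scratch directory.
dir=$(mktemp -d)
# Throwaway self-signed CA standing in for minikubeCA.pem (illustrative only).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null
# Subject hash: an 8-hex-digit value, like the log's b5213941 for minikubeCA.pem.
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
# OpenSSL resolves CAs in a hashed directory via <hash>.0 symlinks.
ln -fs "$dir/ca.pem" "$dir/${hash}.0"
ls -la "$dir/${hash}.0"
```

The `test -L … || ln -fs …` guard in the log just makes the operation idempotent across restarts.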
	I0513 22:43:04.753651   10004 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0513 22:43:04.758818   10004 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0513 22:43:04.760344   10004 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0513 22:43:04.760344   10004 command_runner.go:130] > Device: 8,1	Inode: 9431368     Links: 1
	I0513 22:43:04.760344   10004 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0513 22:43:04.760344   10004 command_runner.go:130] > Access: 2024-05-13 22:41:10.386347251 +0000
	I0513 22:43:04.760436   10004 command_runner.go:130] > Modify: 2024-05-13 22:41:10.386347251 +0000
	I0513 22:43:04.760436   10004 command_runner.go:130] > Change: 2024-05-13 22:41:10.386347251 +0000
	I0513 22:43:04.760436   10004 command_runner.go:130] >  Birth: 2024-05-13 22:41:10.386347251 +0000
	I0513 22:43:04.768258   10004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0513 22:43:04.773908   10004 command_runner.go:130] > Certificate will not expire
	I0513 22:43:04.784439   10004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0513 22:43:04.789807   10004 command_runner.go:130] > Certificate will not expire
	I0513 22:43:04.800437   10004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0513 22:43:04.809166   10004 command_runner.go:130] > Certificate will not expire
	I0513 22:43:04.818249   10004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0513 22:43:04.821439   10004 command_runner.go:130] > Certificate will not expire
	I0513 22:43:04.833841   10004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0513 22:43:04.836585   10004 command_runner.go:130] > Certificate will not expire
	I0513 22:43:04.848490   10004 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0513 22:43:04.854250   10004 command_runner.go:130] > Certificate will not expire
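Each of the checks above relies on `openssl x509 -checkend N`, which exits 0 and prints "Certificate will not expire" only if the certificate is still valid N seconds from now (86400 = 24 hours). A sketch with a throwaway cert rather than the cluster's real files (names illustrative):

```shell
dir=$(mktemp -d)
# Throwaway cert valid for 30 days, standing in for the cluster certs.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout "$dir/demo.key" -out "$dir/demo.crt" -days 30 2>/dev/null
# -checkend 86400: succeed only if the cert is still valid 24h from now.
openssl x509 -noout -in "$dir/demo.crt" -checkend 86400
status=$?
echo "exit=$status"
```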
	I0513 22:43:04.856873   10004 kubeadm.go:391] StartCluster: {Name:functional-129600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-129600 Namespace:default A
PIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.102.96 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 22:43:04.864357   10004 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0513 22:43:04.893642   10004 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0513 22:43:04.896573   10004 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0513 22:43:04.896573   10004 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0513 22:43:04.896573   10004 command_runner.go:130] > /var/lib/minikube/etcd:
	I0513 22:43:04.896573   10004 command_runner.go:130] > member
	W0513 22:43:04.909886   10004 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0513 22:43:04.909919   10004 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0513 22:43:04.909957   10004 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0513 22:43:04.919296   10004 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0513 22:43:04.933768   10004 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0513 22:43:04.934567   10004 kubeconfig.go:125] found "functional-129600" server: "https://172.23.102.96:8441"
	I0513 22:43:04.935219   10004 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 22:43:04.935868   10004 kapi.go:59] client config for functional-129600: &rest.Config{Host:"https://172.23.102.96:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-129600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-129600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0513 22:43:04.936517   10004 cert_rotation.go:137] Starting client certificate rotation controller
	I0513 22:43:04.945063   10004 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0513 22:43:04.951422   10004 kubeadm.go:624] The running cluster does not require reconfiguration: 172.23.102.96
	I0513 22:43:04.951422   10004 kubeadm.go:1154] stopping kube-system containers ...
	I0513 22:43:04.966290   10004 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0513 22:43:04.987745   10004 command_runner.go:130] > d828f53c208c
	I0513 22:43:04.991740   10004 command_runner.go:130] > 3bbe5ad0be35
	I0513 22:43:04.991740   10004 command_runner.go:130] > 76a9d4b6e76b
	I0513 22:43:04.991740   10004 command_runner.go:130] > cc0c521935e3
	I0513 22:43:04.991740   10004 command_runner.go:130] > 3fdecc4037c0
	I0513 22:43:04.991800   10004 command_runner.go:130] > 4459fa5b1345
	I0513 22:43:04.991800   10004 command_runner.go:130] > 198ce71b5893
	I0513 22:43:04.991800   10004 command_runner.go:130] > 4fb73e9cd2ae
	I0513 22:43:04.991800   10004 command_runner.go:130] > 56967669b9e1
	I0513 22:43:04.991800   10004 command_runner.go:130] > e8fcd4852641
	I0513 22:43:04.991800   10004 command_runner.go:130] > 4da60f423131
	I0513 22:43:04.991800   10004 command_runner.go:130] > 6ad63ef84a17
	I0513 22:43:04.991800   10004 command_runner.go:130] > 0cf9b41c6688
	I0513 22:43:04.991800   10004 command_runner.go:130] > 5f25f2ca9c5e
	I0513 22:43:04.993262   10004 docker.go:483] Stopping containers: [d828f53c208c 3bbe5ad0be35 76a9d4b6e76b cc0c521935e3 3fdecc4037c0 4459fa5b1345 198ce71b5893 4fb73e9cd2ae 56967669b9e1 e8fcd4852641 4da60f423131 6ad63ef84a17 0cf9b41c6688 5f25f2ca9c5e]
	I0513 22:43:05.003206   10004 ssh_runner.go:195] Run: docker stop d828f53c208c 3bbe5ad0be35 76a9d4b6e76b cc0c521935e3 3fdecc4037c0 4459fa5b1345 198ce71b5893 4fb73e9cd2ae 56967669b9e1 e8fcd4852641 4da60f423131 6ad63ef84a17 0cf9b41c6688 5f25f2ca9c5e
	I0513 22:43:05.021836   10004 command_runner.go:130] > d828f53c208c
	I0513 22:43:05.022816   10004 command_runner.go:130] > 3bbe5ad0be35
	I0513 22:43:05.022816   10004 command_runner.go:130] > 76a9d4b6e76b
	I0513 22:43:05.022816   10004 command_runner.go:130] > cc0c521935e3
	I0513 22:43:05.022816   10004 command_runner.go:130] > 3fdecc4037c0
	I0513 22:43:05.022816   10004 command_runner.go:130] > 4459fa5b1345
	I0513 22:43:05.022816   10004 command_runner.go:130] > 198ce71b5893
	I0513 22:43:05.022816   10004 command_runner.go:130] > 4fb73e9cd2ae
	I0513 22:43:05.022816   10004 command_runner.go:130] > 56967669b9e1
	I0513 22:43:05.022956   10004 command_runner.go:130] > e8fcd4852641
	I0513 22:43:05.022995   10004 command_runner.go:130] > 4da60f423131
	I0513 22:43:05.023045   10004 command_runner.go:130] > 6ad63ef84a17
	I0513 22:43:05.023045   10004 command_runner.go:130] > 0cf9b41c6688
	I0513 22:43:05.023045   10004 command_runner.go:130] > 5f25f2ca9c5e
	I0513 22:43:05.032907   10004 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0513 22:43:05.097235   10004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0513 22:43:05.114996   10004 command_runner.go:130] > -rw------- 1 root root 5651 May 13 22:41 /etc/kubernetes/admin.conf
	I0513 22:43:05.114996   10004 command_runner.go:130] > -rw------- 1 root root 5653 May 13 22:41 /etc/kubernetes/controller-manager.conf
	I0513 22:43:05.115095   10004 command_runner.go:130] > -rw------- 1 root root 2007 May 13 22:41 /etc/kubernetes/kubelet.conf
	I0513 22:43:05.115095   10004 command_runner.go:130] > -rw------- 1 root root 5605 May 13 22:41 /etc/kubernetes/scheduler.conf
	I0513 22:43:05.115171   10004 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5651 May 13 22:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 May 13 22:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 May 13 22:41 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 May 13 22:41 /etc/kubernetes/scheduler.conf
	
	I0513 22:43:05.122911   10004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0513 22:43:05.124792   10004 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0513 22:43:05.146187   10004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0513 22:43:05.153439   10004 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0513 22:43:05.169312   10004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0513 22:43:05.171274   10004 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0513 22:43:05.196161   10004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0513 22:43:05.218758   10004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0513 22:43:05.220374   10004 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0513 22:43:05.241910   10004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0513 22:43:05.269147   10004 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0513 22:43:05.285023   10004 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0513 22:43:05.346504   10004 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0513 22:43:05.346504   10004 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0513 22:43:05.346504   10004 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0513 22:43:05.346504   10004 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0513 22:43:05.346504   10004 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0513 22:43:05.346504   10004 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0513 22:43:05.346504   10004 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0513 22:43:05.346504   10004 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0513 22:43:05.346504   10004 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0513 22:43:05.346504   10004 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0513 22:43:05.346504   10004 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0513 22:43:05.346504   10004 command_runner.go:130] > [certs] Using the existing "sa" key
	I0513 22:43:05.346504   10004 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0513 22:43:06.877486   10004 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0513 22:43:06.877558   10004 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0513 22:43:06.877620   10004 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I0513 22:43:06.877620   10004 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0513 22:43:06.877679   10004 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0513 22:43:06.877679   10004 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0513 22:43:06.877741   10004 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.5311923s)
	I0513 22:43:06.877799   10004 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0513 22:43:07.128183   10004 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0513 22:43:07.128183   10004 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0513 22:43:07.128313   10004 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0513 22:43:07.128313   10004 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0513 22:43:07.201230   10004 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0513 22:43:07.201230   10004 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0513 22:43:07.202713   10004 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0513 22:43:07.202713   10004 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0513 22:43:07.202791   10004 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0513 22:43:07.290652   10004 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0513 22:43:07.290652   10004 api_server.go:52] waiting for apiserver process to appear ...
	I0513 22:43:07.300631   10004 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 22:43:07.802463   10004 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 22:43:08.303330   10004 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 22:43:08.810224   10004 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 22:43:08.835030   10004 command_runner.go:130] > 4623
	I0513 22:43:08.837322   10004 api_server.go:72] duration metric: took 1.546541s to wait for apiserver process to appear ...
	I0513 22:43:08.837322   10004 api_server.go:88] waiting for apiserver healthz status ...
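The lines above show api_server.go re-running `pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms until a PID appears. A stand-in sketch of that wait loop, polling for a `sleep` process we start ourselves rather than a real apiserver (the pattern and timings are illustrative):

```shell
# Background process to wait for (stand-in for kube-apiserver).
sleep 5 &
pid=""
for i in 1 2 3 4 5 6 7 8 9 10; do
  # Same pgrep flags as the log: -x exact, -n newest, -f match full command line.
  pid=$(pgrep -xnf "sleep 5" || true)
  [ -n "$pid" ] && break
  sleep 0.5
done
echo "found pid: $pid"
kill "$pid" 2>/dev/null || true
```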
	I0513 22:43:08.837322   10004 api_server.go:253] Checking apiserver healthz at https://172.23.102.96:8441/healthz ...
	I0513 22:43:12.214405   10004 api_server.go:279] https://172.23.102.96:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0513 22:43:12.214405   10004 api_server.go:103] status: https://172.23.102.96:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0513 22:43:12.214405   10004 api_server.go:253] Checking apiserver healthz at https://172.23.102.96:8441/healthz ...
	I0513 22:43:12.254860   10004 api_server.go:279] https://172.23.102.96:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0513 22:43:12.262421   10004 api_server.go:103] status: https://172.23.102.96:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0513 22:43:12.351744   10004 api_server.go:253] Checking apiserver healthz at https://172.23.102.96:8441/healthz ...
	I0513 22:43:12.364581   10004 api_server.go:279] https://172.23.102.96:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0513 22:43:12.370576   10004 api_server.go:103] status: https://172.23.102.96:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0513 22:43:12.839044   10004 api_server.go:253] Checking apiserver healthz at https://172.23.102.96:8441/healthz ...
	I0513 22:43:12.849609   10004 api_server.go:279] https://172.23.102.96:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0513 22:43:12.849609   10004 api_server.go:103] status: https://172.23.102.96:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0513 22:43:13.343732   10004 api_server.go:253] Checking apiserver healthz at https://172.23.102.96:8441/healthz ...
	I0513 22:43:13.352658   10004 api_server.go:279] https://172.23.102.96:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0513 22:43:13.353065   10004 api_server.go:103] status: https://172.23.102.96:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0513 22:43:13.848851   10004 api_server.go:253] Checking apiserver healthz at https://172.23.102.96:8441/healthz ...
	I0513 22:43:13.854995   10004 api_server.go:279] https://172.23.102.96:8441/healthz returned 200:
	ok
	I0513 22:43:13.857824   10004 round_trippers.go:463] GET https://172.23.102.96:8441/version
	I0513 22:43:13.857855   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:13.857855   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:13.857855   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:13.872322   10004 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0513 22:43:13.872661   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:13.872661   10004 round_trippers.go:580]     Audit-Id: 0671c593-f393-4ed2-82a6-2dfee6afe535
	I0513 22:43:13.872661   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:13.872712   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:13.872712   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:13.872712   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:13.872712   10004 round_trippers.go:580]     Content-Length: 263
	I0513 22:43:13.872759   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:14 GMT
	I0513 22:43:13.872818   10004 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0513 22:43:13.872968   10004 api_server.go:141] control plane version: v1.30.0
	I0513 22:43:13.872990   10004 api_server.go:131] duration metric: took 5.0355221s to wait for apiserver health ...
	I0513 22:43:13.872990   10004 cni.go:84] Creating CNI manager for ""
	I0513 22:43:13.873046   10004 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 22:43:13.875244   10004 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0513 22:43:13.885701   10004 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0513 22:43:13.906819   10004 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0513 22:43:13.947719   10004 system_pods.go:43] waiting for kube-system pods to appear ...
	I0513 22:43:13.947719   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods
	I0513 22:43:13.947719   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:13.947719   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:13.947719   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:13.954013   10004 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 22:43:13.954013   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:13.954013   10004 round_trippers.go:580]     Audit-Id: a3078b05-8684-465d-87c0-242a13e7e6d5
	I0513 22:43:13.955252   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:13.955252   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:13.955252   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:13.955252   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:13.955252   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:14 GMT
	I0513 22:43:13.956735   10004 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"495"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-hgbp9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ede517b1-d13d-4817-8f90-401820281717","resourceVersion":"494","creationTimestamp":"2024-05-13T22:41:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"2e9baa3c-7ae2-47ac-b3d8-869faf2bb132","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2e9baa3c-7ae2-47ac-b3d8-869faf2bb132\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51525 chars]
	I0513 22:43:13.961259   10004 system_pods.go:59] 7 kube-system pods found
	I0513 22:43:13.961259   10004 system_pods.go:61] "coredns-7db6d8ff4d-hgbp9" [ede517b1-d13d-4817-8f90-401820281717] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0513 22:43:13.961329   10004 system_pods.go:61] "etcd-functional-129600" [7b41cd03-8c9b-497e-b568-e9854da00b7f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0513 22:43:13.961329   10004 system_pods.go:61] "kube-apiserver-functional-129600" [aaf5324c-fc6b-49af-8b7b-447cbddba2b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0513 22:43:13.961329   10004 system_pods.go:61] "kube-controller-manager-functional-129600" [02095aff-5f3d-4d58-907a-8ced211397b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0513 22:43:13.961404   10004 system_pods.go:61] "kube-proxy-d986q" [a65bf6f4-02c7-4c6c-a145-4b4a1fa636f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0513 22:43:13.961404   10004 system_pods.go:61] "kube-scheduler-functional-129600" [de7f847c-b5de-41b8-8f77-0f55588ac955] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0513 22:43:13.961404   10004 system_pods.go:61] "storage-provisioner" [1bab2554-ed75-4ec0-a1a0-bff155677696] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0513 22:43:13.961404   10004 system_pods.go:74] duration metric: took 13.6851ms to wait for pod list to return data ...
	I0513 22:43:13.961404   10004 node_conditions.go:102] verifying NodePressure condition ...
	I0513 22:43:13.961578   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes
	I0513 22:43:13.961578   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:13.961578   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:13.961578   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:13.966241   10004 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 22:43:13.966241   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:13.966241   10004 round_trippers.go:580]     Audit-Id: 9c1b8966-eda0-4136-9b88-5ba1ff90f31b
	I0513 22:43:13.966241   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:13.966241   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:13.966241   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:13.966241   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:13.966241   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:14 GMT
	I0513 22:43:13.966528   10004 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"495"},"items":[{"metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4840 chars]
	I0513 22:43:13.967000   10004 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0513 22:43:13.967000   10004 node_conditions.go:123] node cpu capacity is 2
	I0513 22:43:13.967000   10004 node_conditions.go:105] duration metric: took 5.5956ms to run NodePressure ...
	I0513 22:43:13.967000   10004 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0513 22:43:14.431549   10004 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0513 22:43:14.431549   10004 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0513 22:43:14.431549   10004 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0513 22:43:14.433124   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0513 22:43:14.433124   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:14.433153   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:14.433153   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:14.437442   10004 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 22:43:14.437442   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:14.438004   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:14.438104   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:14.438132   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:14 GMT
	I0513 22:43:14.438132   10004 round_trippers.go:580]     Audit-Id: 91b8a21c-763e-44e0-b877-2d54a8a99283
	I0513 22:43:14.438164   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:14.438164   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:14.438945   10004 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"497"},"items":[{"metadata":{"name":"etcd-functional-129600","namespace":"kube-system","uid":"7b41cd03-8c9b-497e-b568-e9854da00b7f","resourceVersion":"490","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.102.96:2379","kubernetes.io/config.hash":"d734a16bb20fb94a6b7f5aa563a2e46d","kubernetes.io/config.mirror":"d734a16bb20fb94a6b7f5aa563a2e46d","kubernetes.io/config.seen":"2024-05-13T22:41:20.783390925Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 30957 chars]
	I0513 22:43:14.440179   10004 kubeadm.go:733] kubelet initialised
	I0513 22:43:14.440179   10004 kubeadm.go:734] duration metric: took 8.6298ms waiting for restarted kubelet to initialise ...
	I0513 22:43:14.440179   10004 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0513 22:43:14.440179   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods
	I0513 22:43:14.440179   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:14.440179   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:14.440179   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:14.448982   10004 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 22:43:14.448982   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:14.448982   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:14.448982   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:14 GMT
	I0513 22:43:14.448982   10004 round_trippers.go:580]     Audit-Id: f294fe3b-1db3-43f1-8b3a-b207031e53d0
	I0513 22:43:14.448982   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:14.448982   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:14.448982   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:14.453153   10004 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"497"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-hgbp9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ede517b1-d13d-4817-8f90-401820281717","resourceVersion":"494","creationTimestamp":"2024-05-13T22:41:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"2e9baa3c-7ae2-47ac-b3d8-869faf2bb132","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2e9baa3c-7ae2-47ac-b3d8-869faf2bb132\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51525 chars]
	I0513 22:43:14.454860   10004 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-hgbp9" in "kube-system" namespace to be "Ready" ...
	I0513 22:43:14.455430   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hgbp9
	I0513 22:43:14.455488   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:14.455488   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:14.455488   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:14.455776   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:14.455776   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:14.455776   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:14.455776   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:14 GMT
	I0513 22:43:14.455776   10004 round_trippers.go:580]     Audit-Id: 055c0ab7-275b-4083-9eac-63dd0f13e8a2
	I0513 22:43:14.455776   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:14.455776   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:14.455776   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:14.458769   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hgbp9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ede517b1-d13d-4817-8f90-401820281717","resourceVersion":"494","creationTimestamp":"2024-05-13T22:41:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"2e9baa3c-7ae2-47ac-b3d8-869faf2bb132","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2e9baa3c-7ae2-47ac-b3d8-869faf2bb132\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6503 chars]
	I0513 22:43:14.459016   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:14.459016   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:14.459016   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:14.459016   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:14.466803   10004 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 22:43:14.466803   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:14.466803   10004 round_trippers.go:580]     Audit-Id: e19cf79d-552d-4fe7-b03e-29d78e45522b
	I0513 22:43:14.466803   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:14.466803   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:14.466803   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:14.466803   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:14.466803   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:14 GMT
	I0513 22:43:14.470388   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:14.960516   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hgbp9
	I0513 22:43:14.960516   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:14.960516   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:14.960724   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:14.970330   10004 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0513 22:43:14.970930   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:14.970930   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:14.970930   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:14.970930   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:14.970930   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:15 GMT
	I0513 22:43:14.970986   10004 round_trippers.go:580]     Audit-Id: cff60bb9-294b-473b-9eb6-f1b800c92318
	I0513 22:43:14.970986   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:14.971122   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hgbp9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ede517b1-d13d-4817-8f90-401820281717","resourceVersion":"494","creationTimestamp":"2024-05-13T22:41:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"2e9baa3c-7ae2-47ac-b3d8-869faf2bb132","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2e9baa3c-7ae2-47ac-b3d8-869faf2bb132\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6503 chars]
	I0513 22:43:14.971922   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:14.971961   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:14.971997   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:14.971997   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:14.972229   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:14.972229   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:14.972229   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:14.972229   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:14.972229   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:15 GMT
	I0513 22:43:14.972229   10004 round_trippers.go:580]     Audit-Id: 4e2262dd-e4d9-4f6d-83bd-fe6dd5e1500f
	I0513 22:43:14.972229   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:14.975406   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:14.975801   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:15.457305   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hgbp9
	I0513 22:43:15.457859   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:15.457859   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:15.457859   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:15.458108   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:15.458108   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:15.458108   10004 round_trippers.go:580]     Audit-Id: c2ea27b8-af2b-460a-bf14-cf4140945136
	I0513 22:43:15.458108   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:15.458108   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:15.458108   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:15.461141   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:15.461141   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:15 GMT
	I0513 22:43:15.461364   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hgbp9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ede517b1-d13d-4817-8f90-401820281717","resourceVersion":"500","creationTimestamp":"2024-05-13T22:41:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"2e9baa3c-7ae2-47ac-b3d8-869faf2bb132","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2e9baa3c-7ae2-47ac-b3d8-869faf2bb132\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0513 22:43:15.461985   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:15.462058   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:15.462058   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:15.462058   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:15.468633   10004 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 22:43:15.468633   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:15.468633   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:15.468633   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:15.468633   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:15.468633   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:15 GMT
	I0513 22:43:15.468633   10004 round_trippers.go:580]     Audit-Id: 0641763b-56fc-446c-adb7-acc3bf398ef7
	I0513 22:43:15.468633   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:15.468633   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:15.957906   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hgbp9
	I0513 22:43:15.958029   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:15.958029   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:15.958029   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:15.962437   10004 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 22:43:15.962437   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:15.962437   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:15.962437   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:16 GMT
	I0513 22:43:15.962437   10004 round_trippers.go:580]     Audit-Id: cc87a13f-99ec-4970-b611-d57d352690e8
	I0513 22:43:15.962437   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:15.962437   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:15.962437   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:15.962437   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hgbp9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ede517b1-d13d-4817-8f90-401820281717","resourceVersion":"500","creationTimestamp":"2024-05-13T22:41:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"2e9baa3c-7ae2-47ac-b3d8-869faf2bb132","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2e9baa3c-7ae2-47ac-b3d8-869faf2bb132\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0513 22:43:15.963627   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:15.963717   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:15.963717   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:15.963717   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:15.967414   10004 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 22:43:15.967478   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:15.967478   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:15.967478   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:15.967478   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:16 GMT
	I0513 22:43:15.967478   10004 round_trippers.go:580]     Audit-Id: f3ad33e2-8f47-4c95-b91d-7bd30a13ba41
	I0513 22:43:15.967478   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:15.967478   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:15.967478   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:16.469522   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hgbp9
	I0513 22:43:16.469522   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:16.469522   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:16.469522   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:16.470254   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:16.470254   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:16.470254   10004 round_trippers.go:580]     Audit-Id: 53e8f433-423f-4a88-bc24-a1489d912c4f
	I0513 22:43:16.470254   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:16.470254   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:16.470254   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:16.470254   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:16.470254   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:16 GMT
	I0513 22:43:16.473995   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hgbp9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ede517b1-d13d-4817-8f90-401820281717","resourceVersion":"500","creationTimestamp":"2024-05-13T22:41:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"2e9baa3c-7ae2-47ac-b3d8-869faf2bb132","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2e9baa3c-7ae2-47ac-b3d8-869faf2bb132\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0513 22:43:16.474630   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:16.474630   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:16.474630   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:16.474630   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:16.477086   10004 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 22:43:16.477086   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:16.477086   10004 round_trippers.go:580]     Audit-Id: 4a6fb6ab-e7ae-4357-bb33-ed14e2aa86c3
	I0513 22:43:16.477086   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:16.477086   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:16.477086   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:16.477086   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:16.477086   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:16 GMT
	I0513 22:43:16.478135   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:16.478721   10004 pod_ready.go:102] pod "coredns-7db6d8ff4d-hgbp9" in "kube-system" namespace has status "Ready":"False"
	I0513 22:43:16.970844   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hgbp9
	I0513 22:43:16.970844   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:16.970844   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:16.970844   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:16.971607   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:16.971607   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:16.971607   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:16.971607   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:16.971607   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:16.971607   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:17 GMT
	I0513 22:43:16.971607   10004 round_trippers.go:580]     Audit-Id: 3dd19562-2c03-431c-a055-ba6bf3e4e0c4
	I0513 22:43:16.971607   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:16.974528   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hgbp9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ede517b1-d13d-4817-8f90-401820281717","resourceVersion":"502","creationTimestamp":"2024-05-13T22:41:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"2e9baa3c-7ae2-47ac-b3d8-869faf2bb132","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2e9baa3c-7ae2-47ac-b3d8-869faf2bb132\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6450 chars]
	I0513 22:43:16.975138   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:16.975213   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:16.975213   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:16.975213   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:16.975399   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:16.978078   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:16.978078   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:16.978078   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:17 GMT
	I0513 22:43:16.978078   10004 round_trippers.go:580]     Audit-Id: a861d8e4-067b-4c10-abd1-e5d65c8db6e7
	I0513 22:43:16.978078   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:16.978078   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:16.978078   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:16.978248   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:16.978643   10004 pod_ready.go:92] pod "coredns-7db6d8ff4d-hgbp9" in "kube-system" namespace has status "Ready":"True"
	I0513 22:43:16.978715   10004 pod_ready.go:81] duration metric: took 2.5237827s for pod "coredns-7db6d8ff4d-hgbp9" in "kube-system" namespace to be "Ready" ...
	I0513 22:43:16.978715   10004 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-129600" in "kube-system" namespace to be "Ready" ...
	I0513 22:43:16.978791   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/etcd-functional-129600
	I0513 22:43:16.978791   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:16.978865   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:16.978865   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:16.984266   10004 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:43:16.984266   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:16.984266   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:16.984266   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:17 GMT
	I0513 22:43:16.984266   10004 round_trippers.go:580]     Audit-Id: f1917dfa-0e83-4926-ac1b-0e191c6af094
	I0513 22:43:16.984266   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:16.984266   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:16.984266   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:16.984266   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-129600","namespace":"kube-system","uid":"7b41cd03-8c9b-497e-b568-e9854da00b7f","resourceVersion":"490","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.102.96:2379","kubernetes.io/config.hash":"d734a16bb20fb94a6b7f5aa563a2e46d","kubernetes.io/config.mirror":"d734a16bb20fb94a6b7f5aa563a2e46d","kubernetes.io/config.seen":"2024-05-13T22:41:20.783390925Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6597 chars]
	I0513 22:43:16.985770   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:16.985802   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:16.985802   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:16.985843   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:16.987137   10004 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0513 22:43:16.987137   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:16.987137   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:16.987137   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:17 GMT
	I0513 22:43:16.987137   10004 round_trippers.go:580]     Audit-Id: 800ea51c-c0a6-451d-98de-f507d772fd3f
	I0513 22:43:16.987137   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:16.987137   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:16.987137   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:16.987137   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:17.487931   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/etcd-functional-129600
	I0513 22:43:17.487931   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:17.487931   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:17.487931   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:17.488602   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:17.488602   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:17.488602   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:17.488602   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:17 GMT
	I0513 22:43:17.492480   10004 round_trippers.go:580]     Audit-Id: e0b973cd-846a-4041-bd1a-3e49189edd0a
	I0513 22:43:17.492480   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:17.492480   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:17.492480   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:17.492690   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-129600","namespace":"kube-system","uid":"7b41cd03-8c9b-497e-b568-e9854da00b7f","resourceVersion":"490","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.102.96:2379","kubernetes.io/config.hash":"d734a16bb20fb94a6b7f5aa563a2e46d","kubernetes.io/config.mirror":"d734a16bb20fb94a6b7f5aa563a2e46d","kubernetes.io/config.seen":"2024-05-13T22:41:20.783390925Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6597 chars]
	I0513 22:43:17.493751   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:17.493751   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:17.493832   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:17.493832   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:17.494065   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:17.494065   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:17.494065   10004 round_trippers.go:580]     Audit-Id: a5888021-3e1d-415d-bd96-282076b312e8
	I0513 22:43:17.494065   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:17.494065   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:17.494065   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:17.494065   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:17.494065   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:17 GMT
	I0513 22:43:17.496962   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:17.985003   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/etcd-functional-129600
	I0513 22:43:17.985003   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:17.985003   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:17.985003   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:17.989369   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:17.989369   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:17.989369   10004 round_trippers.go:580]     Audit-Id: 2f708950-2f58-44b3-8f69-580c1e6752b0
	I0513 22:43:17.989369   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:17.989369   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:17.989369   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:17.989369   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:17.989369   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:18 GMT
	I0513 22:43:17.989369   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-129600","namespace":"kube-system","uid":"7b41cd03-8c9b-497e-b568-e9854da00b7f","resourceVersion":"490","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.102.96:2379","kubernetes.io/config.hash":"d734a16bb20fb94a6b7f5aa563a2e46d","kubernetes.io/config.mirror":"d734a16bb20fb94a6b7f5aa563a2e46d","kubernetes.io/config.seen":"2024-05-13T22:41:20.783390925Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6597 chars]
	I0513 22:43:17.989966   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:17.989966   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:17.989966   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:17.989966   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:17.990616   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:17.990616   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:17.990616   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:17.990616   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:17.990616   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:18 GMT
	I0513 22:43:17.990616   10004 round_trippers.go:580]     Audit-Id: 4620be77-afd4-4308-86f7-63b2a5c272a9
	I0513 22:43:17.990616   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:17.994016   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:17.994304   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:18.494401   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/etcd-functional-129600
	I0513 22:43:18.494495   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:18.494495   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:18.494495   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:18.502967   10004 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 22:43:18.502967   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:18.502967   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:18.502967   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:18 GMT
	I0513 22:43:18.502967   10004 round_trippers.go:580]     Audit-Id: ee72acde-1e8c-4592-a4e5-70520939a261
	I0513 22:43:18.502967   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:18.502967   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:18.502967   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:18.503507   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-129600","namespace":"kube-system","uid":"7b41cd03-8c9b-497e-b568-e9854da00b7f","resourceVersion":"490","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.102.96:2379","kubernetes.io/config.hash":"d734a16bb20fb94a6b7f5aa563a2e46d","kubernetes.io/config.mirror":"d734a16bb20fb94a6b7f5aa563a2e46d","kubernetes.io/config.seen":"2024-05-13T22:41:20.783390925Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6597 chars]
	I0513 22:43:18.504239   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:18.504239   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:18.504239   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:18.504239   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:18.504912   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:18.504912   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:18.504912   10004 round_trippers.go:580]     Audit-Id: a418de42-7238-4ad6-9150-459c3e108b23
	I0513 22:43:18.504912   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:18.504912   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:18.504912   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:18.504912   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:18.504912   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:18 GMT
	I0513 22:43:18.506934   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:18.985366   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/etcd-functional-129600
	I0513 22:43:18.985366   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:18.985366   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:18.985366   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:18.985707   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:18.989280   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:18.989280   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:18.989280   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:18.989280   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:18.989280   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:19 GMT
	I0513 22:43:18.989280   10004 round_trippers.go:580]     Audit-Id: e7ea635c-2f0c-4ab3-a1e5-27622c7aa88a
	I0513 22:43:18.989280   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:18.990111   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-129600","namespace":"kube-system","uid":"7b41cd03-8c9b-497e-b568-e9854da00b7f","resourceVersion":"490","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.102.96:2379","kubernetes.io/config.hash":"d734a16bb20fb94a6b7f5aa563a2e46d","kubernetes.io/config.mirror":"d734a16bb20fb94a6b7f5aa563a2e46d","kubernetes.io/config.seen":"2024-05-13T22:41:20.783390925Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6597 chars]
	I0513 22:43:18.990735   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:18.990735   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:18.990735   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:18.990735   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:18.990983   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:18.990983   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:18.990983   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:18.990983   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:19 GMT
	I0513 22:43:18.990983   10004 round_trippers.go:580]     Audit-Id: bb5e7c37-4d59-4e19-bf0f-9b2a5e44393a
	I0513 22:43:18.990983   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:18.990983   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:18.990983   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:18.993815   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:18.994535   10004 pod_ready.go:102] pod "etcd-functional-129600" in "kube-system" namespace has status "Ready":"False"
	I0513 22:43:19.488877   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/etcd-functional-129600
	I0513 22:43:19.489102   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:19.489102   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:19.489194   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:19.489483   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:19.489483   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:19.493507   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:19.493507   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:19.493507   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:19.493507   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:19 GMT
	I0513 22:43:19.493507   10004 round_trippers.go:580]     Audit-Id: 6c213d34-f50e-40aa-962c-a1a7bca2093a
	I0513 22:43:19.493507   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:19.493786   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-129600","namespace":"kube-system","uid":"7b41cd03-8c9b-497e-b568-e9854da00b7f","resourceVersion":"490","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.102.96:2379","kubernetes.io/config.hash":"d734a16bb20fb94a6b7f5aa563a2e46d","kubernetes.io/config.mirror":"d734a16bb20fb94a6b7f5aa563a2e46d","kubernetes.io/config.seen":"2024-05-13T22:41:20.783390925Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6597 chars]
	I0513 22:43:19.494770   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:19.494770   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:19.494770   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:19.494837   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:19.497708   10004 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 22:43:19.497772   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:19.497772   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:19 GMT
	I0513 22:43:19.497772   10004 round_trippers.go:580]     Audit-Id: 2659409e-1e27-4c53-b68b-a6106119414c
	I0513 22:43:19.497772   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:19.497772   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:19.497772   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:19.497772   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:19.497772   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:19.986925   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/etcd-functional-129600
	I0513 22:43:19.986925   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:19.986925   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:19.986925   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:19.987413   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:19.987413   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:19.987413   10004 round_trippers.go:580]     Audit-Id: 8ba4ce08-cf8c-4735-84e1-3f7fe94ab0bb
	I0513 22:43:19.987413   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:19.987413   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:19.987413   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:19.987413   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:19.987413   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:20 GMT
	I0513 22:43:19.991496   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-129600","namespace":"kube-system","uid":"7b41cd03-8c9b-497e-b568-e9854da00b7f","resourceVersion":"557","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.102.96:2379","kubernetes.io/config.hash":"d734a16bb20fb94a6b7f5aa563a2e46d","kubernetes.io/config.mirror":"d734a16bb20fb94a6b7f5aa563a2e46d","kubernetes.io/config.seen":"2024-05-13T22:41:20.783390925Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6596 chars]
	I0513 22:43:19.992450   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:19.992450   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:19.992450   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:19.992450   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:19.998880   10004 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 22:43:19.998880   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:19.998880   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:19.999421   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:20 GMT
	I0513 22:43:19.999421   10004 round_trippers.go:580]     Audit-Id: 89dd69f5-7205-49ec-ae9c-fa29298526a3
	I0513 22:43:19.999421   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:19.999421   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:19.999421   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:19.999844   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:20.485829   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/etcd-functional-129600
	I0513 22:43:20.485829   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:20.485829   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:20.485937   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:20.486212   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:20.486212   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:20.486212   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:20 GMT
	I0513 22:43:20.486212   10004 round_trippers.go:580]     Audit-Id: 13a1736e-eec9-416f-9ac7-765f863a59f7
	I0513 22:43:20.486212   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:20.486212   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:20.486212   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:20.486212   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:20.489671   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-129600","namespace":"kube-system","uid":"7b41cd03-8c9b-497e-b568-e9854da00b7f","resourceVersion":"558","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.102.96:2379","kubernetes.io/config.hash":"d734a16bb20fb94a6b7f5aa563a2e46d","kubernetes.io/config.mirror":"d734a16bb20fb94a6b7f5aa563a2e46d","kubernetes.io/config.seen":"2024-05-13T22:41:20.783390925Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6373 chars]
	I0513 22:43:20.490521   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:20.490604   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:20.490604   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:20.490604   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:20.492109   10004 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0513 22:43:20.492109   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:20.492109   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:20 GMT
	I0513 22:43:20.492109   10004 round_trippers.go:580]     Audit-Id: 11922c3c-f1cd-4894-b4ff-0e2f2fb978d7
	I0513 22:43:20.492109   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:20.492109   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:20.492109   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:20.492109   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:20.493636   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:20.493636   10004 pod_ready.go:92] pod "etcd-functional-129600" in "kube-system" namespace has status "Ready":"True"
	I0513 22:43:20.494167   10004 pod_ready.go:81] duration metric: took 3.51535s for pod "etcd-functional-129600" in "kube-system" namespace to be "Ready" ...
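The `pod_ready.go` lines above poll the etcd pod roughly every 500ms until its `Ready` condition flips from `"False"` to `"True"`, then record the elapsed duration. A minimal sketch of that condition check, using simplified structs rather than minikube's actual client-go types (`isPodReady` and `podCondition` are hypothetical names for illustration):

```go
package main

import "fmt"

// podCondition mirrors the two fields of corev1.PodCondition that the
// readiness check cares about (a simplification, not the real type).
type podCondition struct {
	Type   string
	Status string
}

// isPodReady reports whether a pod's condition list contains
// Ready=True — the test the pod_ready log lines are polling for.
func isPodReady(conds []podCondition) bool {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	// Matches the log: first poll sees Ready=False, a later poll sees Ready=True.
	notReady := []podCondition{{Type: "Ready", Status: "False"}}
	ready := []podCondition{{Type: "Ready", Status: "True"}}
	fmt.Println(isPodReady(notReady))
	fmt.Println(isPodReady(ready))
}
```

In the real test, this check runs inside a timed poll loop (the "waiting up to 4m0s" lines), which explains the repeated GETs against `/api/v1/namespaces/kube-system/pods/etcd-functional-129600` at ~500ms intervals.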
	I0513 22:43:20.494167   10004 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-129600" in "kube-system" namespace to be "Ready" ...
	I0513 22:43:20.494321   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-129600
	I0513 22:43:20.494321   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:20.494321   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:20.494321   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:20.496278   10004 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0513 22:43:20.496278   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:20.496278   10004 round_trippers.go:580]     Audit-Id: 79d8235a-6b87-45ab-bd5b-0aac0c1f42f7
	I0513 22:43:20.496278   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:20.496278   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:20.496278   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:20.496278   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:20.497530   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:20 GMT
	I0513 22:43:20.497851   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-129600","namespace":"kube-system","uid":"aaf5324c-fc6b-49af-8b7b-447cbddba2b5","resourceVersion":"491","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.102.96:8441","kubernetes.io/config.hash":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.mirror":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.seen":"2024-05-13T22:41:20.783394525Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0513 22:43:20.497950   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:20.497950   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:20.497950   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:20.497950   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:20.498694   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:20.498694   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:20.498694   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:20.498694   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:20.498694   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:20.498694   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:20.498694   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:20 GMT
	I0513 22:43:20.498694   10004 round_trippers.go:580]     Audit-Id: a18c8549-53d6-4240-817b-578588ee86eb
	I0513 22:43:20.501241   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:21.005174   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-129600
	I0513 22:43:21.005174   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:21.005174   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:21.005174   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:21.005797   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:21.005797   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:21.005797   10004 round_trippers.go:580]     Audit-Id: 6d95a409-78f3-40b8-ba66-69e0b3318804
	I0513 22:43:21.005797   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:21.005797   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:21.009361   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:21.009361   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:21.009361   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:21 GMT
	I0513 22:43:21.009757   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-129600","namespace":"kube-system","uid":"aaf5324c-fc6b-49af-8b7b-447cbddba2b5","resourceVersion":"491","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.102.96:8441","kubernetes.io/config.hash":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.mirror":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.seen":"2024-05-13T22:41:20.783394525Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0513 22:43:21.010812   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:21.010894   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:21.010894   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:21.010969   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:21.013995   10004 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 22:43:21.013995   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:21.013995   10004 round_trippers.go:580]     Audit-Id: 2c7a38f4-cd90-4d8f-8288-3295f230de4c
	I0513 22:43:21.013995   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:21.013995   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:21.013995   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:21.013995   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:21.013995   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:21 GMT
	I0513 22:43:21.013995   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:21.507520   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-129600
	I0513 22:43:21.507596   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:21.507673   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:21.507673   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:21.513194   10004 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:43:21.513194   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:21.513194   10004 round_trippers.go:580]     Audit-Id: b46fee52-ea0f-410f-8ca1-c9ee70da8272
	I0513 22:43:21.513194   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:21.513194   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:21.513194   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:21.513194   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:21.513194   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:21 GMT
	I0513 22:43:21.513194   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-129600","namespace":"kube-system","uid":"aaf5324c-fc6b-49af-8b7b-447cbddba2b5","resourceVersion":"491","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.102.96:8441","kubernetes.io/config.hash":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.mirror":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.seen":"2024-05-13T22:41:20.783394525Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0513 22:43:21.514403   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:21.514403   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:21.514403   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:21.514403   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:21.516763   10004 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 22:43:21.516763   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:21.516763   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:21 GMT
	I0513 22:43:21.516763   10004 round_trippers.go:580]     Audit-Id: e851310b-2766-42fb-9014-9d8bb67b8e9d
	I0513 22:43:21.516763   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:21.516763   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:21.516763   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:21.516763   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:21.516763   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:22.008918   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-129600
	I0513 22:43:22.008918   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:22.008918   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:22.008918   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:22.009323   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:22.013220   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:22.013220   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:22.013220   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:22.013220   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:22.013220   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:22 GMT
	I0513 22:43:22.013220   10004 round_trippers.go:580]     Audit-Id: b5904c60-c7c6-46a7-a6a6-69529e7ab4a5
	I0513 22:43:22.013220   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:22.013942   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-129600","namespace":"kube-system","uid":"aaf5324c-fc6b-49af-8b7b-447cbddba2b5","resourceVersion":"491","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.102.96:8441","kubernetes.io/config.hash":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.mirror":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.seen":"2024-05-13T22:41:20.783394525Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0513 22:43:22.015000   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:22.015000   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:22.015088   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:22.015088   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:22.017316   10004 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 22:43:22.018279   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:22.018279   10004 round_trippers.go:580]     Audit-Id: 79b4ca8b-7661-4d26-9c66-ae596a933934
	I0513 22:43:22.018279   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:22.018279   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:22.018279   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:22.018279   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:22.018279   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:22 GMT
	I0513 22:43:22.018279   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:22.500727   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-129600
	I0513 22:43:22.500727   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:22.500813   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:22.500813   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:22.510043   10004 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0513 22:43:22.510043   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:22.510043   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:22.510043   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:22.510043   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:22.510043   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:22 GMT
	I0513 22:43:22.510043   10004 round_trippers.go:580]     Audit-Id: e908d4e5-a06c-4294-b314-720565c745a6
	I0513 22:43:22.510043   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:22.510043   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-129600","namespace":"kube-system","uid":"aaf5324c-fc6b-49af-8b7b-447cbddba2b5","resourceVersion":"491","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.102.96:8441","kubernetes.io/config.hash":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.mirror":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.seen":"2024-05-13T22:41:20.783394525Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0513 22:43:22.511420   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:22.511484   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:22.511484   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:22.511484   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:22.512104   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:22.512104   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:22.512104   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:22.512104   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:22.512104   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:22.512104   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:22 GMT
	I0513 22:43:22.512104   10004 round_trippers.go:580]     Audit-Id: 8c042b37-64dc-4bb8-861c-e04e39e0d426
	I0513 22:43:22.512104   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:22.514359   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:22.514702   10004 pod_ready.go:102] pod "kube-apiserver-functional-129600" in "kube-system" namespace has status "Ready":"False"
	I0513 22:43:22.994814   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-129600
	I0513 22:43:22.994814   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:22.994814   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:22.994814   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:22.998436   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:22.998436   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:22.998515   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:22.998515   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:22.998515   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:22.998515   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:22.998515   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:23 GMT
	I0513 22:43:22.998611   10004 round_trippers.go:580]     Audit-Id: 1d2c1f94-fa89-40bc-a309-cf244e973772
	I0513 22:43:22.998763   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-129600","namespace":"kube-system","uid":"aaf5324c-fc6b-49af-8b7b-447cbddba2b5","resourceVersion":"491","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.102.96:8441","kubernetes.io/config.hash":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.mirror":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.seen":"2024-05-13T22:41:20.783394525Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0513 22:43:22.999373   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:22.999458   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:22.999458   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:22.999458   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:23.000167   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:23.000167   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:23.000167   10004 round_trippers.go:580]     Audit-Id: dd88b845-15b1-4e33-8f34-9753bcf0eb82
	I0513 22:43:23.000167   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:23.000167   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:23.000167   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:23.000167   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:23.000167   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:23 GMT
	I0513 22:43:23.002349   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:23.509269   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-129600
	I0513 22:43:23.509495   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:23.509495   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:23.509495   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:23.509966   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:23.509966   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:23.513812   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:23.513812   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:23.513812   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:23 GMT
	I0513 22:43:23.513812   10004 round_trippers.go:580]     Audit-Id: 458ccade-65b5-41bb-97c3-d3c6f8c32771
	I0513 22:43:23.513812   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:23.513812   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:23.513812   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-129600","namespace":"kube-system","uid":"aaf5324c-fc6b-49af-8b7b-447cbddba2b5","resourceVersion":"491","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.102.96:8441","kubernetes.io/config.hash":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.mirror":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.seen":"2024-05-13T22:41:20.783394525Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0513 22:43:23.514891   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:23.514990   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:23.514990   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:23.514990   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:23.518128   10004 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 22:43:23.518128   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:23.518128   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:23.518128   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:23.518128   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:23.518128   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:23.518128   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:23 GMT
	I0513 22:43:23.518128   10004 round_trippers.go:580]     Audit-Id: bd726186-c9b4-4001-8d82-7762f1b58b87
	I0513 22:43:23.518128   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:24.002427   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-129600
	I0513 22:43:24.002506   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:24.002506   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:24.002506   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:24.008328   10004 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:43:24.008328   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:24.008328   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:24.008328   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:24.008328   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:24.008328   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:24 GMT
	I0513 22:43:24.008328   10004 round_trippers.go:580]     Audit-Id: efc8e6ff-6dd1-472a-ad18-0d05e8f22880
	I0513 22:43:24.008328   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:24.009549   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-129600","namespace":"kube-system","uid":"aaf5324c-fc6b-49af-8b7b-447cbddba2b5","resourceVersion":"491","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.102.96:8441","kubernetes.io/config.hash":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.mirror":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.seen":"2024-05-13T22:41:20.783394525Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0513 22:43:24.010207   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:24.010207   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:24.010207   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:24.010207   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:24.010828   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:24.010828   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:24.010828   10004 round_trippers.go:580]     Audit-Id: 7da71b93-8161-499a-b7db-43c8c985c69e
	I0513 22:43:24.010828   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:24.010828   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:24.010828   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:24.010828   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:24.010828   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:24 GMT
	I0513 22:43:24.010828   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:24.502109   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-129600
	I0513 22:43:24.502172   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:24.502234   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:24.502234   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:24.502547   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:24.502547   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:24.502547   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:24.502547   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:24.502547   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:24 GMT
	I0513 22:43:24.502547   10004 round_trippers.go:580]     Audit-Id: 03f32fe5-06ae-49ad-8f2c-f453c30131fc
	I0513 22:43:24.502547   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:24.505739   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:24.506079   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-129600","namespace":"kube-system","uid":"aaf5324c-fc6b-49af-8b7b-447cbddba2b5","resourceVersion":"491","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.102.96:8441","kubernetes.io/config.hash":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.mirror":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.seen":"2024-05-13T22:41:20.783394525Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0513 22:43:24.506877   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:24.506877   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:24.506877   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:24.506877   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:24.512637   10004 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:43:24.512637   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:24.512637   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:24.512637   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:24 GMT
	I0513 22:43:24.512637   10004 round_trippers.go:580]     Audit-Id: 45ef8fa4-df09-4a36-a458-ffa248863481
	I0513 22:43:24.512637   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:24.512637   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:24.512637   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:24.512637   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:24.999330   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-129600
	I0513 22:43:24.999475   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:24.999475   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:24.999475   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:25.006166   10004 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 22:43:25.006166   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:25.006166   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:25.006238   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:25.006238   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:25 GMT
	I0513 22:43:25.006238   10004 round_trippers.go:580]     Audit-Id: de17295e-e62a-4d5f-86fa-bdcb2270dd3e
	I0513 22:43:25.006238   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:25.006238   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:25.006411   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-129600","namespace":"kube-system","uid":"aaf5324c-fc6b-49af-8b7b-447cbddba2b5","resourceVersion":"491","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.102.96:8441","kubernetes.io/config.hash":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.mirror":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.seen":"2024-05-13T22:41:20.783394525Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0513 22:43:25.007089   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:25.007132   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:25.007132   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:25.007132   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:25.012429   10004 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:43:25.012551   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:25.012551   10004 round_trippers.go:580]     Audit-Id: 62026af6-de57-4dd2-8d62-8346a3b7d137
	I0513 22:43:25.012551   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:25.012551   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:25.012551   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:25.012551   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:25.012551   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:25 GMT
	I0513 22:43:25.012731   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:25.013153   10004 pod_ready.go:102] pod "kube-apiserver-functional-129600" in "kube-system" namespace has status "Ready":"False"
	I0513 22:43:25.514826   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-129600
	I0513 22:43:25.514826   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:25.514826   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:25.514937   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:25.523031   10004 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 22:43:25.523031   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:25.523031   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:25.523031   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:25.523031   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:25.523031   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:25.523031   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:25 GMT
	I0513 22:43:25.523031   10004 round_trippers.go:580]     Audit-Id: 71f0724e-6da2-4e2a-8c14-42e904cf5c63
	I0513 22:43:25.523031   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-129600","namespace":"kube-system","uid":"aaf5324c-fc6b-49af-8b7b-447cbddba2b5","resourceVersion":"491","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.102.96:8441","kubernetes.io/config.hash":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.mirror":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.seen":"2024-05-13T22:41:20.783394525Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0513 22:43:25.525057   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:25.525107   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:25.525107   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:25.525107   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:25.527886   10004 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 22:43:25.527886   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:25.527886   10004 round_trippers.go:580]     Audit-Id: f87ecbe8-bc44-4d07-9d0b-de36e9080879
	I0513 22:43:25.527886   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:25.528020   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:25.528020   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:25.528020   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:25.528020   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:25 GMT
	I0513 22:43:25.528610   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:25.998016   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-129600
	I0513 22:43:25.998421   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:25.998421   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:25.998421   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:26.003988   10004 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:43:26.003988   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:26.003988   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:26.003988   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:26.003988   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:26.003988   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:26 GMT
	I0513 22:43:26.003988   10004 round_trippers.go:580]     Audit-Id: dba5830f-9bfc-423e-a077-ede2784cbf3f
	I0513 22:43:26.003988   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:26.003988   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-129600","namespace":"kube-system","uid":"aaf5324c-fc6b-49af-8b7b-447cbddba2b5","resourceVersion":"491","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.102.96:8441","kubernetes.io/config.hash":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.mirror":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.seen":"2024-05-13T22:41:20.783394525Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0513 22:43:26.004712   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:26.004712   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:26.004712   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:26.004712   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:26.007097   10004 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 22:43:26.007097   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:26.007097   10004 round_trippers.go:580]     Audit-Id: 8db9e11c-cf0b-4a85-81c3-098dd2380c6e
	I0513 22:43:26.007097   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:26.007097   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:26.007097   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:26.007097   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:26.007097   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:26 GMT
	I0513 22:43:26.008176   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:26.514650   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-129600
	I0513 22:43:26.514650   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:26.514780   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:26.514780   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:26.515090   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:26.518283   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:26.518283   10004 round_trippers.go:580]     Audit-Id: 98639583-6ec6-4b76-b591-19d8847b63c0
	I0513 22:43:26.518283   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:26.518283   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:26.518283   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:26.518283   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:26.518283   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:26 GMT
	I0513 22:43:26.518740   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-129600","namespace":"kube-system","uid":"aaf5324c-fc6b-49af-8b7b-447cbddba2b5","resourceVersion":"491","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.102.96:8441","kubernetes.io/config.hash":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.mirror":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.seen":"2024-05-13T22:41:20.783394525Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0513 22:43:26.519401   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:26.519471   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:26.519471   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:26.519471   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:26.524484   10004 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:43:26.524484   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:26.524484   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:26.524484   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:26.524484   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:26.524484   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:26.524484   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:26 GMT
	I0513 22:43:26.524484   10004 round_trippers.go:580]     Audit-Id: 86760b76-eb3a-4c12-b23e-e8f4d7ecd20e
	I0513 22:43:26.524484   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:27.002493   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-129600
	I0513 22:43:27.002493   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:27.002493   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:27.002493   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:27.003034   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:27.003034   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:27.003034   10004 round_trippers.go:580]     Audit-Id: 585a0d19-64ce-4eb1-9940-23180157577c
	I0513 22:43:27.003034   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:27.003034   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:27.007571   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:27.007571   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:27.007571   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:27 GMT
	I0513 22:43:27.007901   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-129600","namespace":"kube-system","uid":"aaf5324c-fc6b-49af-8b7b-447cbddba2b5","resourceVersion":"491","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.102.96:8441","kubernetes.io/config.hash":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.mirror":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.seen":"2024-05-13T22:41:20.783394525Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8148 chars]
	I0513 22:43:27.008163   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:27.008163   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:27.008163   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:27.008700   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:27.008889   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:27.008889   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:27.008889   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:27.008889   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:27.008889   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:27.008889   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:27 GMT
	I0513 22:43:27.008889   10004 round_trippers.go:580]     Audit-Id: 22a5b534-2961-4388-ade5-7cedee9ba205
	I0513 22:43:27.008889   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:27.011565   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:27.501672   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-129600
	I0513 22:43:27.501672   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:27.501672   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:27.501672   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:27.515112   10004 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0513 22:43:27.515112   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:27.515112   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:27.515112   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:27 GMT
	I0513 22:43:27.515112   10004 round_trippers.go:580]     Audit-Id: 92b22cf4-d9b8-4815-87ad-565e6e59719a
	I0513 22:43:27.515112   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:27.515112   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:27.515112   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:27.515112   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-129600","namespace":"kube-system","uid":"aaf5324c-fc6b-49af-8b7b-447cbddba2b5","resourceVersion":"570","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.102.96:8441","kubernetes.io/config.hash":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.mirror":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.seen":"2024-05-13T22:41:20.783394525Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7904 chars]
	I0513 22:43:27.516096   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:27.516096   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:27.516096   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:27.516167   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:27.516387   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:27.519218   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:27.519218   10004 round_trippers.go:580]     Audit-Id: 7fb631e8-129d-456b-9923-9b178fdc90a5
	I0513 22:43:27.519218   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:27.519218   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:27.519218   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:27.519218   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:27.519316   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:27 GMT
	I0513 22:43:27.519316   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:27.520055   10004 pod_ready.go:92] pod "kube-apiserver-functional-129600" in "kube-system" namespace has status "Ready":"True"
	I0513 22:43:27.520129   10004 pod_ready.go:81] duration metric: took 7.0257583s for pod "kube-apiserver-functional-129600" in "kube-system" namespace to be "Ready" ...
	I0513 22:43:27.520152   10004 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-129600" in "kube-system" namespace to be "Ready" ...
	I0513 22:43:27.520298   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-129600
	I0513 22:43:27.520298   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:27.520373   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:27.520373   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:27.525525   10004 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:43:27.525525   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:27.525525   10004 round_trippers.go:580]     Audit-Id: c8f7a502-6dcd-43af-914f-c2d003277340
	I0513 22:43:27.525525   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:27.525525   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:27.525525   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:27.525525   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:27.525525   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:27 GMT
	I0513 22:43:27.526563   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-129600","namespace":"kube-system","uid":"02095aff-5f3d-4d58-907a-8ced211397b9","resourceVersion":"568","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"46a88326cb15a7a0288b3c5bb493d896","kubernetes.io/config.mirror":"46a88326cb15a7a0288b3c5bb493d896","kubernetes.io/config.seen":"2024-05-13T22:41:20.783395725Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7472 chars]
	I0513 22:43:27.526751   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:27.526751   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:27.526751   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:27.526751   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:27.528383   10004 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0513 22:43:27.528383   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:27.528383   10004 round_trippers.go:580]     Audit-Id: 8a003bbd-f648-4716-adee-388d658e0eda
	I0513 22:43:27.528383   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:27.528383   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:27.528383   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:27.528383   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:27.528383   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:27 GMT
	I0513 22:43:27.528383   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:27.530193   10004 pod_ready.go:92] pod "kube-controller-manager-functional-129600" in "kube-system" namespace has status "Ready":"True"
	I0513 22:43:27.530222   10004 pod_ready.go:81] duration metric: took 10.0703ms for pod "kube-controller-manager-functional-129600" in "kube-system" namespace to be "Ready" ...
	I0513 22:43:27.530222   10004 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d986q" in "kube-system" namespace to be "Ready" ...
	I0513 22:43:27.530308   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-proxy-d986q
	I0513 22:43:27.530308   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:27.530308   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:27.530360   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:27.532894   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:27.532894   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:27.532894   10004 round_trippers.go:580]     Audit-Id: e968e1e2-1db3-432b-956e-87b23219aae9
	I0513 22:43:27.532894   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:27.532894   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:27.532894   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:27.532894   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:27.532894   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:27 GMT
	I0513 22:43:27.532894   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-d986q","generateName":"kube-proxy-","namespace":"kube-system","uid":"a65bf6f4-02c7-4c6c-a145-4b4a1fa636f4","resourceVersion":"501","creationTimestamp":"2024-05-13T22:41:34Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"551dbc0f-be9e-44ad-b58c-a064f3c5df59","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"551dbc0f-be9e-44ad-b58c-a064f3c5df59\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6035 chars]
	I0513 22:43:27.533645   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:27.533645   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:27.533645   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:27.533645   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:27.535778   10004 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 22:43:27.535778   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:27.535778   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:27 GMT
	I0513 22:43:27.535778   10004 round_trippers.go:580]     Audit-Id: c7211e3d-143a-44f2-8ae2-f03906b58941
	I0513 22:43:27.536304   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:27.536304   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:27.536304   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:27.536304   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:27.536522   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:27.536868   10004 pod_ready.go:92] pod "kube-proxy-d986q" in "kube-system" namespace has status "Ready":"True"
	I0513 22:43:27.536868   10004 pod_ready.go:81] duration metric: took 6.6458ms for pod "kube-proxy-d986q" in "kube-system" namespace to be "Ready" ...
	I0513 22:43:27.536868   10004 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-129600" in "kube-system" namespace to be "Ready" ...
	I0513 22:43:27.536868   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-129600
	I0513 22:43:27.536868   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:27.536868   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:27.536868   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:27.537477   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:27.537477   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:27.537477   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:27.537477   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:27.537477   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:27.537477   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:27 GMT
	I0513 22:43:27.537477   10004 round_trippers.go:580]     Audit-Id: 5b882747-2c42-4d61-85df-2e6619f14ebf
	I0513 22:43:27.537477   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:27.540038   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-129600","namespace":"kube-system","uid":"de7f847c-b5de-41b8-8f77-0f55588ac955","resourceVersion":"562","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"57d6c40a35e855656f010e0ef80efa57","kubernetes.io/config.mirror":"57d6c40a35e855656f010e0ef80efa57","kubernetes.io/config.seen":"2024-05-13T22:41:20.783396625Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5202 chars]
	I0513 22:43:27.540038   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:27.540038   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:27.540038   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:27.540568   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:27.540731   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:27.540731   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:27.540731   10004 round_trippers.go:580]     Audit-Id: 5df6e030-4544-414e-901a-069d5e00fde2
	I0513 22:43:27.540731   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:27.540731   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:27.540731   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:27.540731   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:27.540731   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:27 GMT
	I0513 22:43:27.543298   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:27.543298   10004 pod_ready.go:92] pod "kube-scheduler-functional-129600" in "kube-system" namespace has status "Ready":"True"
	I0513 22:43:27.543298   10004 pod_ready.go:81] duration metric: took 6.4297ms for pod "kube-scheduler-functional-129600" in "kube-system" namespace to be "Ready" ...
	I0513 22:43:27.543298   10004 pod_ready.go:38] duration metric: took 13.1027395s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0513 22:43:27.543298   10004 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0513 22:43:27.558347   10004 command_runner.go:130] > -16
	I0513 22:43:27.558414   10004 ops.go:34] apiserver oom_adj: -16
	I0513 22:43:27.558478   10004 kubeadm.go:591] duration metric: took 22.6478657s to restartPrimaryControlPlane
	I0513 22:43:27.558541   10004 kubeadm.go:393] duration metric: took 22.701011s to StartCluster
	I0513 22:43:27.558612   10004 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:43:27.558777   10004 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 22:43:27.560026   10004 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:43:27.561507   10004 start.go:234] Will wait 6m0s for node &{Name: IP:172.23.102.96 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 22:43:27.561507   10004 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0513 22:43:27.561592   10004 addons.go:69] Setting storage-provisioner=true in profile "functional-129600"
	I0513 22:43:27.561697   10004 addons.go:234] Setting addon storage-provisioner=true in "functional-129600"
	I0513 22:43:27.571271   10004 out.go:177] * Verifying Kubernetes components...
	I0513 22:43:27.561697   10004 addons.go:69] Setting default-storageclass=true in profile "functional-129600"
	W0513 22:43:27.561768   10004 addons.go:243] addon storage-provisioner should already be in state true
	I0513 22:43:27.561768   10004 config.go:182] Loaded profile config "functional-129600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 22:43:27.571976   10004 host.go:66] Checking if "functional-129600" exists ...
	I0513 22:43:27.571976   10004 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-129600"
	I0513 22:43:27.575203   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
	I0513 22:43:27.575835   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
	I0513 22:43:27.585692   10004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:43:27.852489   10004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 22:43:27.877180   10004 node_ready.go:35] waiting up to 6m0s for node "functional-129600" to be "Ready" ...
	I0513 22:43:27.877180   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:27.877180   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:27.877180   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:27.877180   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:27.880285   10004 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 22:43:27.880285   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:27.880285   10004 round_trippers.go:580]     Audit-Id: a838ae27-0717-4f4c-8801-d7b2ce5db3b0
	I0513 22:43:27.880285   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:27.880285   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:27.880285   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:27.880285   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:27.880285   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:28 GMT
	I0513 22:43:27.881645   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:27.882122   10004 node_ready.go:49] node "functional-129600" has status "Ready":"True"
	I0513 22:43:27.882152   10004 node_ready.go:38] duration metric: took 4.9724ms for node "functional-129600" to be "Ready" ...
	I0513 22:43:27.882185   10004 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0513 22:43:27.882281   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods
	I0513 22:43:27.882307   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:27.882307   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:27.882307   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:27.883758   10004 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0513 22:43:27.883758   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:27.883758   10004 round_trippers.go:580]     Audit-Id: 428e88cf-826e-468e-8484-f6f3123c654b
	I0513 22:43:27.883758   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:27.883758   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:27.883758   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:27.886541   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:27.886541   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:28 GMT
	I0513 22:43:27.887209   10004 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"570"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-hgbp9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ede517b1-d13d-4817-8f90-401820281717","resourceVersion":"502","creationTimestamp":"2024-05-13T22:41:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"2e9baa3c-7ae2-47ac-b3d8-869faf2bb132","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2e9baa3c-7ae2-47ac-b3d8-869faf2bb132\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50098 chars]
	I0513 22:43:27.889064   10004 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hgbp9" in "kube-system" namespace to be "Ready" ...
	I0513 22:43:27.889064   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hgbp9
	I0513 22:43:27.889064   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:27.889064   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:27.889064   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:27.890641   10004 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0513 22:43:27.890641   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:27.890641   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:27.890641   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:28 GMT
	I0513 22:43:27.890641   10004 round_trippers.go:580]     Audit-Id: 0c1061da-ae85-4093-b32f-7b68b6dcdd4b
	I0513 22:43:27.890641   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:27.890641   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:27.890641   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:27.892427   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hgbp9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ede517b1-d13d-4817-8f90-401820281717","resourceVersion":"502","creationTimestamp":"2024-05-13T22:41:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"2e9baa3c-7ae2-47ac-b3d8-869faf2bb132","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2e9baa3c-7ae2-47ac-b3d8-869faf2bb132\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6450 chars]
	I0513 22:43:27.912449   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:27.912532   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:27.912532   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:27.912532   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:27.915868   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:27.915868   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:27.915868   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:28 GMT
	I0513 22:43:27.915868   10004 round_trippers.go:580]     Audit-Id: c02194f2-063f-4069-8ed5-97036ae5c9ac
	I0513 22:43:27.915961   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:27.915961   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:27.915961   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:27.916002   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:27.916820   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:27.916961   10004 pod_ready.go:92] pod "coredns-7db6d8ff4d-hgbp9" in "kube-system" namespace has status "Ready":"True"
	I0513 22:43:27.916961   10004 pod_ready.go:81] duration metric: took 27.8965ms for pod "coredns-7db6d8ff4d-hgbp9" in "kube-system" namespace to be "Ready" ...
	I0513 22:43:27.916961   10004 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-129600" in "kube-system" namespace to be "Ready" ...
	I0513 22:43:28.104163   10004 request.go:629] Waited for 187.1964ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/etcd-functional-129600
	I0513 22:43:28.104423   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/etcd-functional-129600
	I0513 22:43:28.104423   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:28.104423   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:28.104423   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:28.105084   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:28.108260   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:28.108260   10004 round_trippers.go:580]     Audit-Id: 6da7c66c-e100-44dd-b9f5-741bfbde858f
	I0513 22:43:28.108260   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:28.108260   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:28.108260   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:28.108260   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:28.108260   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:28 GMT
	I0513 22:43:28.108260   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-129600","namespace":"kube-system","uid":"7b41cd03-8c9b-497e-b568-e9854da00b7f","resourceVersion":"558","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.102.96:2379","kubernetes.io/config.hash":"d734a16bb20fb94a6b7f5aa563a2e46d","kubernetes.io/config.mirror":"d734a16bb20fb94a6b7f5aa563a2e46d","kubernetes.io/config.seen":"2024-05-13T22:41:20.783390925Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6373 chars]
	I0513 22:43:28.313440   10004 request.go:629] Waited for 204.4155ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:28.313901   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:28.313901   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:28.313901   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:28.313901   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:28.317069   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:28.317069   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:28.317069   10004 round_trippers.go:580]     Audit-Id: 4c058279-a681-49a7-a31a-9eb88f8f876e
	I0513 22:43:28.317069   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:28.317069   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:28.317069   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:28.317069   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:28.317069   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:28 GMT
	I0513 22:43:28.317069   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:28.317746   10004 pod_ready.go:92] pod "etcd-functional-129600" in "kube-system" namespace has status "Ready":"True"
	I0513 22:43:28.317746   10004 pod_ready.go:81] duration metric: took 400.7736ms for pod "etcd-functional-129600" in "kube-system" namespace to be "Ready" ...
	I0513 22:43:28.317746   10004 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-129600" in "kube-system" namespace to be "Ready" ...
	I0513 22:43:28.505882   10004 request.go:629] Waited for 188.0481ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-129600
	I0513 22:43:28.505882   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-129600
	I0513 22:43:28.505882   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:28.505882   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:28.505882   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:28.506501   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:28.509910   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:28.509910   10004 round_trippers.go:580]     Audit-Id: 4d1e8b1e-09d7-4959-b2ea-e2c0f9b5be8c
	I0513 22:43:28.509910   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:28.509910   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:28.509910   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:28.509910   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:28.509910   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:28 GMT
	I0513 22:43:28.510042   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-129600","namespace":"kube-system","uid":"aaf5324c-fc6b-49af-8b7b-447cbddba2b5","resourceVersion":"570","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.102.96:8441","kubernetes.io/config.hash":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.mirror":"87374ed397f34cf260fe43ba53316d2f","kubernetes.io/config.seen":"2024-05-13T22:41:20.783394525Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7904 chars]
	I0513 22:43:28.709810   10004 request.go:629] Waited for 199.0061ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:28.710021   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:28.710021   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:28.710021   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:28.710021   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:28.713933   10004 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0513 22:43:28.713933   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:28.713933   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:28 GMT
	I0513 22:43:28.713933   10004 round_trippers.go:580]     Audit-Id: 7bae0d79-5ca1-4fba-8af5-293c157c20ff
	I0513 22:43:28.713933   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:28.713933   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:28.714033   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:28.714033   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:28.714033   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:28.714690   10004 pod_ready.go:92] pod "kube-apiserver-functional-129600" in "kube-system" namespace has status "Ready":"True"
	I0513 22:43:28.714690   10004 pod_ready.go:81] duration metric: took 396.9319ms for pod "kube-apiserver-functional-129600" in "kube-system" namespace to be "Ready" ...
	I0513 22:43:28.714690   10004 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-129600" in "kube-system" namespace to be "Ready" ...
	I0513 22:43:28.903732   10004 request.go:629] Waited for 188.891ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-129600
	I0513 22:43:28.903732   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-129600
	I0513 22:43:28.903732   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:28.903732   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:28.903732   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:28.904393   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:28.907821   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:28.907821   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:28.907821   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:29 GMT
	I0513 22:43:28.907821   10004 round_trippers.go:580]     Audit-Id: 27d866a4-104d-4801-a254-0cd48880c079
	I0513 22:43:28.907821   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:28.907821   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:28.907821   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:28.908331   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-129600","namespace":"kube-system","uid":"02095aff-5f3d-4d58-907a-8ced211397b9","resourceVersion":"568","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"46a88326cb15a7a0288b3c5bb493d896","kubernetes.io/config.mirror":"46a88326cb15a7a0288b3c5bb493d896","kubernetes.io/config.seen":"2024-05-13T22:41:20.783395725Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7472 chars]
	I0513 22:43:29.111114   10004 request.go:629] Waited for 201.6334ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:29.111274   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:29.111274   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:29.111274   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:29.111274   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:29.111938   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:29.111938   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:29.111938   10004 round_trippers.go:580]     Audit-Id: cc05b662-6ac0-497b-a2dd-d33ab06bcb7b
	I0513 22:43:29.111938   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:29.111938   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:29.111938   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:29.111938   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:29.111938   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:29 GMT
	I0513 22:43:29.115434   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:29.115866   10004 pod_ready.go:92] pod "kube-controller-manager-functional-129600" in "kube-system" namespace has status "Ready":"True"
	I0513 22:43:29.115866   10004 pod_ready.go:81] duration metric: took 401.1643ms for pod "kube-controller-manager-functional-129600" in "kube-system" namespace to be "Ready" ...
	I0513 22:43:29.115866   10004 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d986q" in "kube-system" namespace to be "Ready" ...
	I0513 22:43:29.304137   10004 request.go:629] Waited for 188.196ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-proxy-d986q
	I0513 22:43:29.304393   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-proxy-d986q
	I0513 22:43:29.304393   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:29.304393   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:29.304393   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:29.311754   10004 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 22:43:29.311754   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:29.311754   10004 round_trippers.go:580]     Audit-Id: 58ae69cb-9df2-4ec4-a7f7-ade1b42ced21
	I0513 22:43:29.311754   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:29.311754   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:29.311754   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:29.311754   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:29.311754   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:29 GMT
	I0513 22:43:29.312387   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-d986q","generateName":"kube-proxy-","namespace":"kube-system","uid":"a65bf6f4-02c7-4c6c-a145-4b4a1fa636f4","resourceVersion":"501","creationTimestamp":"2024-05-13T22:41:34Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"551dbc0f-be9e-44ad-b58c-a064f3c5df59","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"551dbc0f-be9e-44ad-b58c-a064f3c5df59\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6035 chars]
	I0513 22:43:29.498337   10004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:43:29.498337   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:43:29.500859   10004 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 22:43:29.503349   10004 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0513 22:43:29.503349   10004 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0513 22:43:29.503349   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
	I0513 22:43:29.506016   10004 request.go:629] Waited for 193.4073ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:29.506016   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:29.506016   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:29.506016   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:29.506016   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:29.511575   10004 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 22:43:29.511575   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:29.511575   10004 round_trippers.go:580]     Audit-Id: 9b764ccb-1e24-4774-a5b4-5d06f5ef0a9f
	I0513 22:43:29.511575   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:29.511575   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:29.512121   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:29.512121   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:29.512121   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:29 GMT
	I0513 22:43:29.513296   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:29.513296   10004 pod_ready.go:92] pod "kube-proxy-d986q" in "kube-system" namespace has status "Ready":"True"
	I0513 22:43:29.513296   10004 pod_ready.go:81] duration metric: took 397.4191ms for pod "kube-proxy-d986q" in "kube-system" namespace to be "Ready" ...
	I0513 22:43:29.513296   10004 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-129600" in "kube-system" namespace to be "Ready" ...
	I0513 22:43:29.545409   10004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:43:29.545409   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:43:29.546005   10004 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 22:43:29.546591   10004 kapi.go:59] client config for functional-129600: &rest.Config{Host:"https://172.23.102.96:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-129600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-129600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0513 22:43:29.547143   10004 addons.go:234] Setting addon default-storageclass=true in "functional-129600"
	W0513 22:43:29.547143   10004 addons.go:243] addon default-storageclass should already be in state true
	I0513 22:43:29.547319   10004 host.go:66] Checking if "functional-129600" exists ...
	I0513 22:43:29.547906   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
	I0513 22:43:29.704540   10004 request.go:629] Waited for 190.5029ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-129600
	I0513 22:43:29.704540   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-129600
	I0513 22:43:29.704783   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:29.704783   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:29.704783   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:29.705064   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:29.705064   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:29.705064   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:29.705064   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:29.705064   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:29.705064   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:29 GMT
	I0513 22:43:29.705064   10004 round_trippers.go:580]     Audit-Id: 555eb056-0822-4c01-b74f-fbe54effc253
	I0513 22:43:29.705064   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:29.708703   10004 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-129600","namespace":"kube-system","uid":"de7f847c-b5de-41b8-8f77-0f55588ac955","resourceVersion":"562","creationTimestamp":"2024-05-13T22:41:20Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"57d6c40a35e855656f010e0ef80efa57","kubernetes.io/config.mirror":"57d6c40a35e855656f010e0ef80efa57","kubernetes.io/config.seen":"2024-05-13T22:41:20.783396625Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5202 chars]
	I0513 22:43:29.912683   10004 request.go:629] Waited for 203.9252ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:29.912930   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes/functional-129600
	I0513 22:43:29.912990   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:29.912990   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:29.912990   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:29.913823   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:29.917757   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:29.917757   10004 round_trippers.go:580]     Audit-Id: 26e77f2c-74a1-450d-a0af-7963527ac4cf
	I0513 22:43:29.917757   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:29.917757   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:29.917843   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:29.917843   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:29.917843   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:30 GMT
	I0513 22:43:29.917997   10004 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-13T22:41:17Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0513 22:43:29.919034   10004 pod_ready.go:92] pod "kube-scheduler-functional-129600" in "kube-system" namespace has status "Ready":"True"
	I0513 22:43:29.919034   10004 pod_ready.go:81] duration metric: took 405.7264ms for pod "kube-scheduler-functional-129600" in "kube-system" namespace to be "Ready" ...
	I0513 22:43:29.919034   10004 pod_ready.go:38] duration metric: took 2.0367901s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0513 22:43:29.919034   10004 api_server.go:52] waiting for apiserver process to appear ...
	I0513 22:43:29.928587   10004 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 22:43:29.950445   10004 command_runner.go:130] > 4623
	I0513 22:43:29.950445   10004 api_server.go:72] duration metric: took 2.3887839s to wait for apiserver process to appear ...
	I0513 22:43:29.950445   10004 api_server.go:88] waiting for apiserver healthz status ...
	I0513 22:43:29.950445   10004 api_server.go:253] Checking apiserver healthz at https://172.23.102.96:8441/healthz ...
	I0513 22:43:29.963094   10004 api_server.go:279] https://172.23.102.96:8441/healthz returned 200:
	ok
	I0513 22:43:29.963258   10004 round_trippers.go:463] GET https://172.23.102.96:8441/version
	I0513 22:43:29.963258   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:29.963344   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:29.963344   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:29.963594   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:29.965111   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:29.965111   10004 round_trippers.go:580]     Content-Length: 263
	I0513 22:43:29.965111   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:30 GMT
	I0513 22:43:29.965111   10004 round_trippers.go:580]     Audit-Id: 0ca003c6-6173-490b-b3a7-bf491b8fe25d
	I0513 22:43:29.965111   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:29.965183   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:29.965183   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:29.965183   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:29.965183   10004 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0513 22:43:29.965255   10004 api_server.go:141] control plane version: v1.30.0
	I0513 22:43:29.965318   10004 api_server.go:131] duration metric: took 14.8721ms to wait for apiserver health ...
	I0513 22:43:29.965379   10004 system_pods.go:43] waiting for kube-system pods to appear ...
	I0513 22:43:30.110886   10004 request.go:629] Waited for 145.3105ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods
	I0513 22:43:30.110886   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods
	I0513 22:43:30.111075   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:30.111075   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:30.111075   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:30.116956   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:30.117021   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:30.117021   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:30.117021   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:30.117021   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:30.117021   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:30.117021   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:30 GMT
	I0513 22:43:30.117021   10004 round_trippers.go:580]     Audit-Id: 983a2c4f-3c87-44dd-a656-82b6869e617b
	I0513 22:43:30.117743   10004 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"570"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-hgbp9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ede517b1-d13d-4817-8f90-401820281717","resourceVersion":"502","creationTimestamp":"2024-05-13T22:41:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"2e9baa3c-7ae2-47ac-b3d8-869faf2bb132","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2e9baa3c-7ae2-47ac-b3d8-869faf2bb132\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50098 chars]
	I0513 22:43:30.120035   10004 system_pods.go:59] 7 kube-system pods found
	I0513 22:43:30.120108   10004 system_pods.go:61] "coredns-7db6d8ff4d-hgbp9" [ede517b1-d13d-4817-8f90-401820281717] Running
	I0513 22:43:30.120108   10004 system_pods.go:61] "etcd-functional-129600" [7b41cd03-8c9b-497e-b568-e9854da00b7f] Running
	I0513 22:43:30.120108   10004 system_pods.go:61] "kube-apiserver-functional-129600" [aaf5324c-fc6b-49af-8b7b-447cbddba2b5] Running
	I0513 22:43:30.120108   10004 system_pods.go:61] "kube-controller-manager-functional-129600" [02095aff-5f3d-4d58-907a-8ced211397b9] Running
	I0513 22:43:30.120108   10004 system_pods.go:61] "kube-proxy-d986q" [a65bf6f4-02c7-4c6c-a145-4b4a1fa636f4] Running
	I0513 22:43:30.120108   10004 system_pods.go:61] "kube-scheduler-functional-129600" [de7f847c-b5de-41b8-8f77-0f55588ac955] Running
	I0513 22:43:30.120108   10004 system_pods.go:61] "storage-provisioner" [1bab2554-ed75-4ec0-a1a0-bff155677696] Running
	I0513 22:43:30.120108   10004 system_pods.go:74] duration metric: took 154.7252ms to wait for pod list to return data ...
	I0513 22:43:30.120108   10004 default_sa.go:34] waiting for default service account to be created ...
	I0513 22:43:30.306199   10004 request.go:629] Waited for 186.085ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.96:8441/api/v1/namespaces/default/serviceaccounts
	I0513 22:43:30.306199   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/default/serviceaccounts
	I0513 22:43:30.306199   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:30.306199   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:30.306199   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:30.310549   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:30.310549   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:30.310609   10004 round_trippers.go:580]     Audit-Id: 9064623b-3470-47d0-abde-5e8065e9488e
	I0513 22:43:30.310609   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:30.310609   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:30.310609   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:30.310609   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:30.310609   10004 round_trippers.go:580]     Content-Length: 261
	I0513 22:43:30.310609   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:30 GMT
	I0513 22:43:30.310609   10004 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"570"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"16a279f8-c75d-4cc0-a9bd-aa460b692bf9","resourceVersion":"337","creationTimestamp":"2024-05-13T22:41:33Z"}}]}
	I0513 22:43:30.310609   10004 default_sa.go:45] found service account: "default"
	I0513 22:43:30.310609   10004 default_sa.go:55] duration metric: took 190.495ms for default service account to be created ...
	I0513 22:43:30.310609   10004 system_pods.go:116] waiting for k8s-apps to be running ...
	I0513 22:43:30.507718   10004 request.go:629] Waited for 197.1036ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods
	I0513 22:43:30.507718   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/namespaces/kube-system/pods
	I0513 22:43:30.507718   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:30.507718   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:30.507718   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:30.515276   10004 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 22:43:30.515276   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:30.515276   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:30.515276   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:30.515276   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:30 GMT
	I0513 22:43:30.515276   10004 round_trippers.go:580]     Audit-Id: ff1f2059-f5c7-4df4-bda6-ee09b22da750
	I0513 22:43:30.515276   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:30.515276   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:30.515472   10004 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"570"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-hgbp9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"ede517b1-d13d-4817-8f90-401820281717","resourceVersion":"502","creationTimestamp":"2024-05-13T22:41:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"2e9baa3c-7ae2-47ac-b3d8-869faf2bb132","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T22:41:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2e9baa3c-7ae2-47ac-b3d8-869faf2bb132\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50098 chars]
	I0513 22:43:30.518091   10004 system_pods.go:86] 7 kube-system pods found
	I0513 22:43:30.518140   10004 system_pods.go:89] "coredns-7db6d8ff4d-hgbp9" [ede517b1-d13d-4817-8f90-401820281717] Running
	I0513 22:43:30.518140   10004 system_pods.go:89] "etcd-functional-129600" [7b41cd03-8c9b-497e-b568-e9854da00b7f] Running
	I0513 22:43:30.518140   10004 system_pods.go:89] "kube-apiserver-functional-129600" [aaf5324c-fc6b-49af-8b7b-447cbddba2b5] Running
	I0513 22:43:30.518140   10004 system_pods.go:89] "kube-controller-manager-functional-129600" [02095aff-5f3d-4d58-907a-8ced211397b9] Running
	I0513 22:43:30.518140   10004 system_pods.go:89] "kube-proxy-d986q" [a65bf6f4-02c7-4c6c-a145-4b4a1fa636f4] Running
	I0513 22:43:30.518140   10004 system_pods.go:89] "kube-scheduler-functional-129600" [de7f847c-b5de-41b8-8f77-0f55588ac955] Running
	I0513 22:43:30.518199   10004 system_pods.go:89] "storage-provisioner" [1bab2554-ed75-4ec0-a1a0-bff155677696] Running
	I0513 22:43:30.518199   10004 system_pods.go:126] duration metric: took 207.5839ms to wait for k8s-apps to be running ...
	I0513 22:43:30.518199   10004 system_svc.go:44] waiting for kubelet service to be running ....
	I0513 22:43:30.526437   10004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 22:43:30.547510   10004 system_svc.go:56] duration metric: took 29.3103ms WaitForService to wait for kubelet
	I0513 22:43:30.550981   10004 kubeadm.go:576] duration metric: took 2.9893029s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 22:43:30.551047   10004 node_conditions.go:102] verifying NodePressure condition ...
	I0513 22:43:30.704354   10004 request.go:629] Waited for 152.9974ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.96:8441/api/v1/nodes
	I0513 22:43:30.704354   10004 round_trippers.go:463] GET https://172.23.102.96:8441/api/v1/nodes
	I0513 22:43:30.704354   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:30.704572   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:30.704572   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:30.704730   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:30.708641   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:30.708641   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:30 GMT
	I0513 22:43:30.708641   10004 round_trippers.go:580]     Audit-Id: 572a94c3-6dd4-4c84-be7f-d911259cd8ba
	I0513 22:43:30.708641   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:30.708641   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:30.708641   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:30.708641   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:30.708971   10004 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"570"},"items":[{"metadata":{"name":"functional-129600","uid":"e934fd36-e9fc-465c-8689-9dabc08c3e0d","resourceVersion":"484","creationTimestamp":"2024-05-13T22:41:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-129600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"functional-129600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T22_41_21_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4840 chars]
	I0513 22:43:30.709460   10004 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0513 22:43:30.709460   10004 node_conditions.go:123] node cpu capacity is 2
	I0513 22:43:30.709460   10004 node_conditions.go:105] duration metric: took 158.4084ms to run NodePressure ...
	I0513 22:43:30.709460   10004 start.go:240] waiting for startup goroutines ...
	I0513 22:43:31.455965   10004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:43:31.455965   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:43:31.455965   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-129600 ).networkadapters[0]).ipaddresses[0]
	I0513 22:43:31.475266   10004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:43:31.484748   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:43:31.484748   10004 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0513 22:43:31.484748   10004 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0513 22:43:31.484748   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
	I0513 22:43:33.416894   10004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:43:33.416894   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:43:33.416894   10004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-129600 ).networkadapters[0]).ipaddresses[0]
	I0513 22:43:33.773954   10004 main.go:141] libmachine: [stdout =====>] : 172.23.102.96
	
	I0513 22:43:33.774165   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:43:33.774389   10004 sshutil.go:53] new ssh client: &{IP:172.23.102.96 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-129600\id_rsa Username:docker}
	I0513 22:43:33.895397   10004 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0513 22:43:34.587121   10004 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0513 22:43:34.589714   10004 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0513 22:43:34.589714   10004 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0513 22:43:34.589714   10004 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0513 22:43:34.589714   10004 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0513 22:43:34.589714   10004 command_runner.go:130] > pod/storage-provisioner configured
	I0513 22:43:35.650362   10004 main.go:141] libmachine: [stdout =====>] : 172.23.102.96
	
	I0513 22:43:35.659739   10004 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:43:35.659739   10004 sshutil.go:53] new ssh client: &{IP:172.23.102.96 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-129600\id_rsa Username:docker}
	I0513 22:43:35.781883   10004 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0513 22:43:35.908241   10004 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0513 22:43:35.913776   10004 round_trippers.go:463] GET https://172.23.102.96:8441/apis/storage.k8s.io/v1/storageclasses
	I0513 22:43:35.913867   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:35.913922   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:35.913955   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:35.914334   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:35.914334   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:35.914334   10004 round_trippers.go:580]     Content-Length: 1273
	I0513 22:43:35.914334   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:36 GMT
	I0513 22:43:35.914334   10004 round_trippers.go:580]     Audit-Id: 64332b98-d6e8-4b1f-8d8c-486732d64f6f
	I0513 22:43:35.914334   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:35.914334   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:35.914334   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:35.914334   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:35.914334   10004 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"577"},"items":[{"metadata":{"name":"standard","uid":"cd834fab-969e-48d4-886f-57f0accde2df","resourceVersion":"431","creationTimestamp":"2024-05-13T22:41:42Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-13T22:41:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0513 22:43:35.917855   10004 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"cd834fab-969e-48d4-886f-57f0accde2df","resourceVersion":"431","creationTimestamp":"2024-05-13T22:41:42Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-13T22:41:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0513 22:43:35.917954   10004 round_trippers.go:463] PUT https://172.23.102.96:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0513 22:43:35.917954   10004 round_trippers.go:469] Request Headers:
	I0513 22:43:35.917954   10004 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:43:35.917954   10004 round_trippers.go:473]     Content-Type: application/json
	I0513 22:43:35.918004   10004 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:43:35.918635   10004 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0513 22:43:35.918635   10004 round_trippers.go:577] Response Headers:
	I0513 22:43:35.918635   10004 round_trippers.go:580]     Audit-Id: fe72dbe1-ea2d-4a3e-b6c1-751968ae729c
	I0513 22:43:35.918635   10004 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 22:43:35.918635   10004 round_trippers.go:580]     Content-Type: application/json
	I0513 22:43:35.918635   10004 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8f69758d-64fc-48da-afc7-63e51f240be8
	I0513 22:43:35.918635   10004 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: dca4b94e-4137-4722-ae41-318ab4cf0e34
	I0513 22:43:35.918635   10004 round_trippers.go:580]     Content-Length: 1220
	I0513 22:43:35.918635   10004 round_trippers.go:580]     Date: Mon, 13 May 2024 22:43:36 GMT
	I0513 22:43:35.921513   10004 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"cd834fab-969e-48d4-886f-57f0accde2df","resourceVersion":"431","creationTimestamp":"2024-05-13T22:41:42Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-13T22:41:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0513 22:43:35.925275   10004 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0513 22:43:35.929454   10004 addons.go:505] duration metric: took 8.3678255s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0513 22:43:35.929454   10004 start.go:245] waiting for cluster config update ...
	I0513 22:43:35.929454   10004 start.go:254] writing updated cluster config ...
	I0513 22:43:35.937990   10004 ssh_runner.go:195] Run: rm -f paused
	I0513 22:43:36.065510   10004 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0513 22:43:36.071955   10004 out.go:177] * Done! kubectl is now configured to use "functional-129600" cluster and "default" namespace by default
	
	
	==> Docker <==
	May 13 22:43:13 functional-129600 dockerd[3853]: time="2024-05-13T22:43:13.909236634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 22:43:13 functional-129600 dockerd[3853]: time="2024-05-13T22:43:13.909419044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 22:43:13 functional-129600 dockerd[3853]: time="2024-05-13T22:43:13.918152163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 13 22:43:13 functional-129600 dockerd[3853]: time="2024-05-13T22:43:13.919231427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 13 22:43:13 functional-129600 dockerd[3853]: time="2024-05-13T22:43:13.919319632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 22:43:13 functional-129600 dockerd[3853]: time="2024-05-13T22:43:13.919411938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 22:43:14 functional-129600 cri-dockerd[4069]: time="2024-05-13T22:43:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f1c8b633592a0da82a02177dd6c059a24d36cb2b234381f0f7abeda2fa6aa0c7/resolv.conf as [nameserver 172.23.96.1]"
	May 13 22:43:14 functional-129600 cri-dockerd[4069]: time="2024-05-13T22:43:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5c0d179fb34834ee397b1d98f95c557340ee33a7eda39bb1572f3ffc42a3a5a2/resolv.conf as [nameserver 172.23.96.1]"
	May 13 22:43:14 functional-129600 dockerd[3853]: time="2024-05-13T22:43:14.236293225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 13 22:43:14 functional-129600 dockerd[3853]: time="2024-05-13T22:43:14.236688248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 13 22:43:14 functional-129600 dockerd[3853]: time="2024-05-13T22:43:14.236832657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 22:43:14 functional-129600 dockerd[3853]: time="2024-05-13T22:43:14.237170476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 22:43:14 functional-129600 dockerd[3853]: time="2024-05-13T22:43:14.304049682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 13 22:43:14 functional-129600 dockerd[3853]: time="2024-05-13T22:43:14.305060441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 13 22:43:14 functional-129600 dockerd[3853]: time="2024-05-13T22:43:14.309395895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 22:43:14 functional-129600 dockerd[3853]: time="2024-05-13T22:43:14.312490475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 22:43:14 functional-129600 dockerd[3853]: time="2024-05-13T22:43:14.508038597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 13 22:43:14 functional-129600 dockerd[3853]: time="2024-05-13T22:43:14.508305512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 13 22:43:14 functional-129600 dockerd[3853]: time="2024-05-13T22:43:14.508477622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 22:43:14 functional-129600 dockerd[3853]: time="2024-05-13T22:43:14.508715736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 22:43:14 functional-129600 cri-dockerd[4069]: time="2024-05-13T22:43:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/70bb3a6c3b2ea825fe3fd2977a22564a9e18591d9598c6ec101cf56901bd3d1d/resolv.conf as [nameserver 172.23.96.1]"
	May 13 22:43:15 functional-129600 dockerd[3853]: time="2024-05-13T22:43:15.016112319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 13 22:43:15 functional-129600 dockerd[3853]: time="2024-05-13T22:43:15.016182741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 13 22:43:15 functional-129600 dockerd[3853]: time="2024-05-13T22:43:15.016199046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 22:43:15 functional-129600 dockerd[3853]: time="2024-05-13T22:43:15.016925173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	fc6bd5eb22659       cbb01a7bd410d       About a minute ago   Running             coredns                   1                   70bb3a6c3b2ea       coredns-7db6d8ff4d-hgbp9
	a882e396c2b14       6e38f40d628db       About a minute ago   Running             storage-provisioner       1                   5c0d179fb3483       storage-provisioner
	e9a30311da6b1       a0bf559e280cf       About a minute ago   Running             kube-proxy                1                   f1c8b633592a0       kube-proxy-d986q
	15f55bf0bc204       259c8277fcbbc       About a minute ago   Running             kube-scheduler            1                   5f6e3c80f32f3       kube-scheduler-functional-129600
	4fa67a95c8c20       3861cfcd7c04c       About a minute ago   Running             etcd                      1                   8c6bd9c87a28e       etcd-functional-129600
	d7e27e48a22b0       c42f13656d0b2       About a minute ago   Running             kube-apiserver            1                   4e35e84e895a5       kube-apiserver-functional-129600
	b5f8ea2af7566       c7aad43836fa5       About a minute ago   Running             kube-controller-manager   1                   3e224c48c35bf       kube-controller-manager-functional-129600
	d828f53c208c9       6e38f40d628db       3 minutes ago        Exited              storage-provisioner       0                   3bbe5ad0be35a       storage-provisioner
	76a9d4b6e76bb       cbb01a7bd410d       3 minutes ago        Exited              coredns                   0                   cc0c521935e33       coredns-7db6d8ff4d-hgbp9
	3fdecc4037c0e       a0bf559e280cf       3 minutes ago        Exited              kube-proxy                0                   4459fa5b13456       kube-proxy-d986q
	198ce71b58930       259c8277fcbbc       3 minutes ago        Exited              kube-scheduler            0                   0cf9b41c66886       kube-scheduler-functional-129600
	4fb73e9cd2aef       c7aad43836fa5       3 minutes ago        Exited              kube-controller-manager   0                   5f25f2ca9c5e3       kube-controller-manager-functional-129600
	56967669b9e1c       c42f13656d0b2       3 minutes ago        Exited              kube-apiserver            0                   6ad63ef84a17d       kube-apiserver-functional-129600
	e8fcd48526418       3861cfcd7c04c       3 minutes ago        Exited              etcd                      0                   4da60f423131a       etcd-functional-129600
	
	
	==> coredns [76a9d4b6e76b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = aa3c53a4fee7c79042020c4ad5abc53f615c90ace85c56ddcef4febd643c83c914a53a500e1bfe4eab6dd4f6a22b9d2014a8ba875b505ed10d3063ed95ac2ed3
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54894 - 52076 "HINFO IN 8802842854556076634.8034353442336985976. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02876121s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fc6bd5eb2265] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = aa3c53a4fee7c79042020c4ad5abc53f615c90ace85c56ddcef4febd643c83c914a53a500e1bfe4eab6dd4f6a22b9d2014a8ba875b505ed10d3063ed95ac2ed3
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57709 - 43422 "HINFO IN 9096425013368027097.6309113381320582622. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024345984s
	
	
	==> describe nodes <==
	Name:               functional-129600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-129600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	                    minikube.k8s.io/name=functional-129600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_13T22_41_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 May 2024 22:41:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-129600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 May 2024 22:45:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 May 2024 22:44:44 +0000   Mon, 13 May 2024 22:41:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 May 2024 22:44:44 +0000   Mon, 13 May 2024 22:41:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 May 2024 22:44:44 +0000   Mon, 13 May 2024 22:41:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 May 2024 22:44:44 +0000   Mon, 13 May 2024 22:41:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.102.96
	  Hostname:    functional-129600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 b05b6ea6ac6a4969ac359012e3416ec6
	  System UUID:                57191fce-bba2-0c48-9b18-22ded25ad4c7
	  Boot ID:                    77ede619-9844-4334-b755-0033a87c6b48
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-hgbp9                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m33s
	  kube-system                 etcd-functional-129600                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m47s
	  kube-system                 kube-apiserver-functional-129600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kube-controller-manager-functional-129600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kube-proxy-d986q                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 kube-scheduler-functional-129600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m31s                  kube-proxy       
	  Normal  Starting                 112s                   kube-proxy       
	  Normal  Starting                 3m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m54s (x8 over 3m54s)  kubelet          Node functional-129600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x8 over 3m54s)  kubelet          Node functional-129600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x7 over 3m54s)  kubelet          Node functional-129600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    3m47s                  kubelet          Node functional-129600 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  3m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m47s                  kubelet          Node functional-129600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     3m47s                  kubelet          Node functional-129600 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m47s                  kubelet          Starting kubelet.
	  Normal  NodeReady                3m42s                  kubelet          Node functional-129600 status is now: NodeReady
	  Normal  RegisteredNode           3m34s                  node-controller  Node functional-129600 event: Registered Node functional-129600 in Controller
	  Normal  Starting                 2m                     kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)        kubelet          Node functional-129600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)        kubelet          Node functional-129600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x7 over 2m)        kubelet          Node functional-129600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m                     kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           103s                   node-controller  Node functional-129600 event: Registered Node functional-129600 in Controller
	
	
	==> dmesg <==
	[  +0.080261] kauditd_printk_skb: 205 callbacks suppressed
	[  +5.184827] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.590624] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	[  +4.762745] systemd-fstab-generator[1707]: Ignoring "noauto" option for root device
	[  +0.076264] kauditd_printk_skb: 51 callbacks suppressed
	[  +7.510507] systemd-fstab-generator[2117]: Ignoring "noauto" option for root device
	[  +0.109424] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.273878] systemd-fstab-generator[2335]: Ignoring "noauto" option for root device
	[  +0.171608] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.201023] kauditd_printk_skb: 71 callbacks suppressed
	[May13 22:42] systemd-fstab-generator[3370]: Ignoring "noauto" option for root device
	[  +0.514129] systemd-fstab-generator[3406]: Ignoring "noauto" option for root device
	[  +0.228701] systemd-fstab-generator[3418]: Ignoring "noauto" option for root device
	[  +0.242282] systemd-fstab-generator[3432]: Ignoring "noauto" option for root device
	[  +5.274491] kauditd_printk_skb: 89 callbacks suppressed
	[May13 22:43] systemd-fstab-generator[4021]: Ignoring "noauto" option for root device
	[  +0.159442] systemd-fstab-generator[4033]: Ignoring "noauto" option for root device
	[  +0.180889] systemd-fstab-generator[4045]: Ignoring "noauto" option for root device
	[  +0.245952] systemd-fstab-generator[4061]: Ignoring "noauto" option for root device
	[  +0.718730] systemd-fstab-generator[4214]: Ignoring "noauto" option for root device
	[  +3.117479] systemd-fstab-generator[4331]: Ignoring "noauto" option for root device
	[  +0.082800] kauditd_printk_skb: 140 callbacks suppressed
	[  +6.731049] kauditd_printk_skb: 52 callbacks suppressed
	[ +10.964256] kauditd_printk_skb: 31 callbacks suppressed
	[  +2.864709] systemd-fstab-generator[5223]: Ignoring "noauto" option for root device
	
	
	==> etcd [4fa67a95c8c2] <==
	{"level":"info","ts":"2024-05-13T22:43:09.270449Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"58b31e49d0cfa874","local-member-id":"930d47c3955e009a","added-peer-id":"930d47c3955e009a","added-peer-peer-urls":["https://172.23.102.96:2380"]}
	{"level":"info","ts":"2024-05-13T22:43:09.27544Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"58b31e49d0cfa874","local-member-id":"930d47c3955e009a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-13T22:43:09.275641Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-13T22:43:09.276823Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-13T22:43:09.27723Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-13T22:43:09.277415Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-13T22:43:09.289051Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-13T22:43:09.289708Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"930d47c3955e009a","initial-advertise-peer-urls":["https://172.23.102.96:2380"],"listen-peer-urls":["https://172.23.102.96:2380"],"advertise-client-urls":["https://172.23.102.96:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.102.96:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-13T22:43:09.292391Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-13T22:43:09.2894Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.23.102.96:2380"}
	{"level":"info","ts":"2024-05-13T22:43:09.292717Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.23.102.96:2380"}
	{"level":"info","ts":"2024-05-13T22:43:11.109637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"930d47c3955e009a is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-13T22:43:11.10968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"930d47c3955e009a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-13T22:43:11.109724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"930d47c3955e009a received MsgPreVoteResp from 930d47c3955e009a at term 2"}
	{"level":"info","ts":"2024-05-13T22:43:11.109736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"930d47c3955e009a became candidate at term 3"}
	{"level":"info","ts":"2024-05-13T22:43:11.109742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"930d47c3955e009a received MsgVoteResp from 930d47c3955e009a at term 3"}
	{"level":"info","ts":"2024-05-13T22:43:11.109761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"930d47c3955e009a became leader at term 3"}
	{"level":"info","ts":"2024-05-13T22:43:11.109768Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 930d47c3955e009a elected leader 930d47c3955e009a at term 3"}
	{"level":"info","ts":"2024-05-13T22:43:11.126908Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"930d47c3955e009a","local-member-attributes":"{Name:functional-129600 ClientURLs:[https://172.23.102.96:2379]}","request-path":"/0/members/930d47c3955e009a/attributes","cluster-id":"58b31e49d0cfa874","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-13T22:43:11.128785Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-13T22:43:11.129088Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-13T22:43:11.129275Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-13T22:43:11.12929Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-13T22:43:11.131858Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.23.102.96:2379"}
	{"level":"info","ts":"2024-05-13T22:43:11.135636Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [e8fcd4852641] <==
	{"level":"info","ts":"2024-05-13T22:41:15.14319Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"930d47c3955e009a became candidate at term 2"}
	{"level":"info","ts":"2024-05-13T22:41:15.143295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"930d47c3955e009a received MsgVoteResp from 930d47c3955e009a at term 2"}
	{"level":"info","ts":"2024-05-13T22:41:15.143427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"930d47c3955e009a became leader at term 2"}
	{"level":"info","ts":"2024-05-13T22:41:15.14354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 930d47c3955e009a elected leader 930d47c3955e009a at term 2"}
	{"level":"info","ts":"2024-05-13T22:41:15.149885Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-13T22:41:15.154997Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"930d47c3955e009a","local-member-attributes":"{Name:functional-129600 ClientURLs:[https://172.23.102.96:2379]}","request-path":"/0/members/930d47c3955e009a/attributes","cluster-id":"58b31e49d0cfa874","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-13T22:41:15.155275Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-13T22:41:15.155852Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-13T22:41:15.158067Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-13T22:41:15.158253Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-13T22:41:15.163076Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-13T22:41:15.158309Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"58b31e49d0cfa874","local-member-id":"930d47c3955e009a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-13T22:41:15.16077Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.23.102.96:2379"}
	{"level":"info","ts":"2024-05-13T22:41:15.230833Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-13T22:41:15.23107Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-13T22:42:49.825164Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-13T22:42:49.825215Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-129600","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.23.102.96:2380"],"advertise-client-urls":["https://172.23.102.96:2379"]}
	{"level":"warn","ts":"2024-05-13T22:42:49.825274Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-13T22:42:49.825344Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-13T22:42:49.844776Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 172.23.102.96:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-13T22:42:49.844806Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 172.23.102.96:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-13T22:42:49.844845Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"930d47c3955e009a","current-leader-member-id":"930d47c3955e009a"}
	{"level":"info","ts":"2024-05-13T22:42:49.86474Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"172.23.102.96:2380"}
	{"level":"info","ts":"2024-05-13T22:42:49.865018Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"172.23.102.96:2380"}
	{"level":"info","ts":"2024-05-13T22:42:49.865035Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-129600","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.23.102.96:2380"],"advertise-client-urls":["https://172.23.102.96:2379"]}
	
	
	==> kernel <==
	 22:45:07 up 5 min,  0 users,  load average: 0.57, 0.61, 0.29
	Linux functional-129600 5.10.207 #1 SMP Thu May 9 02:07:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [56967669b9e1] <==
	W0513 22:42:59.134416       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:42:59.162553       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:42:59.164500       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:42:59.176775       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:42:59.221954       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:42:59.281740       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:42:59.314768       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:42:59.322078       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:42:59.322101       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:42:59.339021       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:42:59.390979       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:42:59.401004       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:42:59.415207       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:42:59.416549       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:42:59.457146       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:42:59.519845       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:42:59.576902       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:42:59.580862       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:42:59.674848       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:42:59.727639       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:42:59.744945       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:42:59.786550       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:42:59.804116       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:42:59.841582       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0513 22:42:59.847337       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d7e27e48a22b] <==
	I0513 22:43:12.431938       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0513 22:43:12.432109       1 policy_source.go:224] refreshing policies
	I0513 22:43:12.432240       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0513 22:43:12.440665       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0513 22:43:12.440963       1 aggregator.go:165] initial CRD sync complete...
	I0513 22:43:12.440989       1 autoregister_controller.go:141] Starting autoregister controller
	I0513 22:43:12.440996       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0513 22:43:12.441003       1 cache.go:39] Caches are synced for autoregister controller
	I0513 22:43:12.484881       1 shared_informer.go:320] Caches are synced for configmaps
	I0513 22:43:12.485304       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0513 22:43:12.488260       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0513 22:43:12.491815       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0513 22:43:12.493419       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0513 22:43:12.493633       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0513 22:43:12.494784       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0513 22:43:12.498177       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0513 22:43:12.559186       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0513 22:43:13.287787       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0513 22:43:14.264905       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0513 22:43:14.302726       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0513 22:43:14.414305       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0513 22:43:14.521760       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0513 22:43:14.550256       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0513 22:43:24.904641       1 controller.go:615] quota admission added evaluator for: endpoints
	I0513 22:43:24.906180       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4fb73e9cd2ae] <==
	I0513 22:41:33.636796       1 shared_informer.go:320] Caches are synced for taint
	I0513 22:41:33.637138       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0513 22:41:33.637312       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-129600"
	I0513 22:41:33.639125       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0513 22:41:33.674070       1 shared_informer.go:320] Caches are synced for persistent volume
	I0513 22:41:33.674281       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0513 22:41:33.723834       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0513 22:41:33.755178       1 shared_informer.go:320] Caches are synced for HPA
	I0513 22:41:33.817845       1 shared_informer.go:320] Caches are synced for stateful set
	I0513 22:41:33.826524       1 shared_informer.go:320] Caches are synced for daemon sets
	I0513 22:41:33.834473       1 shared_informer.go:320] Caches are synced for resource quota
	I0513 22:41:33.878507       1 shared_informer.go:320] Caches are synced for resource quota
	I0513 22:41:34.261408       1 shared_informer.go:320] Caches are synced for garbage collector
	I0513 22:41:34.271013       1 shared_informer.go:320] Caches are synced for garbage collector
	I0513 22:41:34.272414       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0513 22:41:34.838406       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="520.490045ms"
	I0513 22:41:34.877546       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="38.971844ms"
	I0513 22:41:34.899897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.224294ms"
	I0513 22:41:34.900643       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="104.407µs"
	I0513 22:41:36.119427       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="82.105µs"
	I0513 22:41:36.131464       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="50.703µs"
	I0513 22:41:36.146469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42.903µs"
	I0513 22:41:37.100709       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.591µs"
	I0513 22:41:37.147265       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.591698ms"
	I0513 22:41:37.147812       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="106.783µs"
	
	
	==> kube-controller-manager [b5f8ea2af756] <==
	I0513 22:43:24.948024       1 shared_informer.go:320] Caches are synced for job
	I0513 22:43:24.954757       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0513 22:43:24.956457       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0513 22:43:24.963138       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0513 22:43:24.963319       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0513 22:43:24.972299       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0513 22:43:24.985956       1 shared_informer.go:320] Caches are synced for node
	I0513 22:43:24.986217       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0513 22:43:24.986433       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0513 22:43:24.986532       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0513 22:43:24.986571       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0513 22:43:24.990485       1 shared_informer.go:320] Caches are synced for persistent volume
	I0513 22:43:24.994226       1 shared_informer.go:320] Caches are synced for stateful set
	I0513 22:43:24.999196       1 shared_informer.go:320] Caches are synced for daemon sets
	I0513 22:43:25.105333       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0513 22:43:25.116307       1 shared_informer.go:320] Caches are synced for crt configmap
	I0513 22:43:25.119971       1 shared_informer.go:320] Caches are synced for disruption
	I0513 22:43:25.135952       1 shared_informer.go:320] Caches are synced for resource quota
	I0513 22:43:25.145223       1 shared_informer.go:320] Caches are synced for resource quota
	I0513 22:43:25.156990       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0513 22:43:25.157276       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.014µs"
	I0513 22:43:25.173774       1 shared_informer.go:320] Caches are synced for deployment
	I0513 22:43:25.583268       1 shared_informer.go:320] Caches are synced for garbage collector
	I0513 22:43:25.612499       1 shared_informer.go:320] Caches are synced for garbage collector
	I0513 22:43:25.612668       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [3fdecc4037c0] <==
	I0513 22:41:35.653870       1 server_linux.go:69] "Using iptables proxy"
	I0513 22:41:35.663783       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.102.96"]
	I0513 22:41:35.709185       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0513 22:41:35.709296       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0513 22:41:35.709314       1 server_linux.go:165] "Using iptables Proxier"
	I0513 22:41:35.713264       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0513 22:41:35.714000       1 server.go:872] "Version info" version="v1.30.0"
	I0513 22:41:35.714085       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0513 22:41:35.715249       1 config.go:192] "Starting service config controller"
	I0513 22:41:35.715283       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0513 22:41:35.715457       1 config.go:101] "Starting endpoint slice config controller"
	I0513 22:41:35.715480       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0513 22:41:35.716083       1 config.go:319] "Starting node config controller"
	I0513 22:41:35.716112       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0513 22:41:35.815986       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0513 22:41:35.816031       1 shared_informer.go:320] Caches are synced for service config
	I0513 22:41:35.816325       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e9a30311da6b] <==
	I0513 22:43:14.446030       1 server_linux.go:69] "Using iptables proxy"
	I0513 22:43:14.475468       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.102.96"]
	I0513 22:43:14.585766       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0513 22:43:14.586503       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0513 22:43:14.586665       1 server_linux.go:165] "Using iptables Proxier"
	I0513 22:43:14.590041       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0513 22:43:14.593603       1 server.go:872] "Version info" version="v1.30.0"
	I0513 22:43:14.593695       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0513 22:43:14.594809       1 config.go:192] "Starting service config controller"
	I0513 22:43:14.594841       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0513 22:43:14.594904       1 config.go:101] "Starting endpoint slice config controller"
	I0513 22:43:14.594912       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0513 22:43:14.595293       1 config.go:319] "Starting node config controller"
	I0513 22:43:14.595321       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0513 22:43:14.695512       1 shared_informer.go:320] Caches are synced for node config
	I0513 22:43:14.695679       1 shared_informer.go:320] Caches are synced for service config
	I0513 22:43:14.695699       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [15f55bf0bc20] <==
	I0513 22:43:09.792313       1 serving.go:380] Generated self-signed cert in-memory
	W0513 22:43:12.351725       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0513 22:43:12.352010       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0513 22:43:12.352118       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0513 22:43:12.352247       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0513 22:43:12.399842       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0513 22:43:12.400038       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0513 22:43:12.408960       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0513 22:43:12.409203       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0513 22:43:12.409234       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0513 22:43:12.413652       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0513 22:43:12.513987       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [198ce71b5893] <==
	E0513 22:41:18.401906       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0513 22:41:18.412130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0513 22:41:18.412166       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0513 22:41:18.429160       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0513 22:41:18.429347       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0513 22:41:18.462275       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0513 22:41:18.462444       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0513 22:41:18.486860       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0513 22:41:18.486891       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0513 22:41:18.515631       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0513 22:41:18.517416       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0513 22:41:18.536179       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0513 22:41:18.536300       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0513 22:41:18.632949       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0513 22:41:18.632988       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0513 22:41:18.728653       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0513 22:41:18.729044       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0513 22:41:18.800582       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0513 22:41:18.800624       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0513 22:41:18.873804       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0513 22:41:18.875538       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0513 22:41:19.014860       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0513 22:41:19.014965       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0513 22:41:21.418978       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0513 22:42:49.913914       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 13 22:43:12 functional-129600 kubelet[4338]: W0513 22:43:12.395569    4338 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:functional-129600" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'functional-129600' and this object
	May 13 22:43:12 functional-129600 kubelet[4338]: E0513 22:43:12.395688    4338 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:functional-129600" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'functional-129600' and this object
	May 13 22:43:12 functional-129600 kubelet[4338]: W0513 22:43:12.395861    4338 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:functional-129600" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'functional-129600' and this object
	May 13 22:43:12 functional-129600 kubelet[4338]: E0513 22:43:12.395982    4338 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:functional-129600" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'functional-129600' and this object
	May 13 22:43:12 functional-129600 kubelet[4338]: I0513 22:43:12.449900    4338 kubelet_node_status.go:112] "Node was previously registered" node="functional-129600"
	May 13 22:43:12 functional-129600 kubelet[4338]: I0513 22:43:12.449994    4338 kubelet_node_status.go:76] "Successfully registered node" node="functional-129600"
	May 13 22:43:12 functional-129600 kubelet[4338]: I0513 22:43:12.451833    4338 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 13 22:43:12 functional-129600 kubelet[4338]: I0513 22:43:12.453227    4338 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 13 22:43:12 functional-129600 kubelet[4338]: I0513 22:43:12.483755    4338 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 13 22:43:12 functional-129600 kubelet[4338]: I0513 22:43:12.487757    4338 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a65bf6f4-02c7-4c6c-a145-4b4a1fa636f4-xtables-lock\") pod \"kube-proxy-d986q\" (UID: \"a65bf6f4-02c7-4c6c-a145-4b4a1fa636f4\") " pod="kube-system/kube-proxy-d986q"
	May 13 22:43:12 functional-129600 kubelet[4338]: I0513 22:43:12.487812    4338 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1bab2554-ed75-4ec0-a1a0-bff155677696-tmp\") pod \"storage-provisioner\" (UID: \"1bab2554-ed75-4ec0-a1a0-bff155677696\") " pod="kube-system/storage-provisioner"
	May 13 22:43:12 functional-129600 kubelet[4338]: I0513 22:43:12.487864    4338 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a65bf6f4-02c7-4c6c-a145-4b4a1fa636f4-lib-modules\") pod \"kube-proxy-d986q\" (UID: \"a65bf6f4-02c7-4c6c-a145-4b4a1fa636f4\") " pod="kube-system/kube-proxy-d986q"
	May 13 22:43:13 functional-129600 kubelet[4338]: E0513 22:43:13.488883    4338 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	May 13 22:43:13 functional-129600 kubelet[4338]: E0513 22:43:13.489052    4338 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ede517b1-d13d-4817-8f90-401820281717-config-volume podName:ede517b1-d13d-4817-8f90-401820281717 nodeName:}" failed. No retries permitted until 2024-05-13 22:43:13.989032584 +0000 UTC m=+6.734518531 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ede517b1-d13d-4817-8f90-401820281717-config-volume") pod "coredns-7db6d8ff4d-hgbp9" (UID: "ede517b1-d13d-4817-8f90-401820281717") : failed to sync configmap cache: timed out waiting for the condition
	May 13 22:43:16 functional-129600 kubelet[4338]: I0513 22:43:16.847877    4338 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 13 22:44:07 functional-129600 kubelet[4338]: E0513 22:44:07.471967    4338 iptables.go:577] "Could not set up iptables canary" err=<
	May 13 22:44:07 functional-129600 kubelet[4338]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 13 22:44:07 functional-129600 kubelet[4338]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 13 22:44:07 functional-129600 kubelet[4338]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 13 22:44:07 functional-129600 kubelet[4338]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 13 22:45:07 functional-129600 kubelet[4338]: E0513 22:45:07.477606    4338 iptables.go:577] "Could not set up iptables canary" err=<
	May 13 22:45:07 functional-129600 kubelet[4338]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 13 22:45:07 functional-129600 kubelet[4338]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 13 22:45:07 functional-129600 kubelet[4338]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 13 22:45:07 functional-129600 kubelet[4338]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [a882e396c2b1] <==
	I0513 22:43:14.485243       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0513 22:43:14.506277       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0513 22:43:14.506324       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0513 22:43:31.921494       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0513 22:43:31.921860       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-129600_51be554a-e4c7-4963-bb63-a48f772b068b!
	I0513 22:43:31.922694       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"872bd754-ac35-489f-96d1-e864cd328138", APIVersion:"v1", ResourceVersion:"571", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-129600_51be554a-e4c7-4963-bb63-a48f772b068b became leader
	I0513 22:43:32.022249       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-129600_51be554a-e4c7-4963-bb63-a48f772b068b!
	
	
	==> storage-provisioner [d828f53c208c] <==
	I0513 22:41:41.585499       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0513 22:41:41.600342       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0513 22:41:41.600420       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0513 22:41:41.616120       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0513 22:41:41.616440       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-129600_738d18d0-b72e-4ec3-bb1f-9902b7414073!
	I0513 22:41:41.616811       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"872bd754-ac35-489f-96d1-e864cd328138", APIVersion:"v1", ResourceVersion:"428", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-129600_738d18d0-b72e-4ec3-bb1f-9902b7414073 became leader
	I0513 22:41:41.719709       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-129600_738d18d0-b72e-4ec3-bb1f-9902b7414073!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 22:44:59.906680    7816 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-129600 -n functional-129600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-129600 -n functional-129600: (10.6567274s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-129600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (29.49s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-129600 config unset cpus" to be -""- but got *"W0513 22:47:50.643083    1064 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-129600 config get cpus: exit status 14 (212.8704ms)

                                                
                                                
** stderr ** 
	W0513 22:47:50.899997    2448 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-129600 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0513 22:47:50.899997    2448 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-129600 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0513 22:47:51.085288    9108 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-129600 config get cpus" to be -""- but got *"W0513 22:47:51.315369    6764 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-129600 config unset cpus" to be -""- but got *"W0513 22:47:51.514499    1824 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-129600 config get cpus: exit status 14 (163.333ms)

                                                
                                                
** stderr ** 
	W0513 22:47:51.696738    5672 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-129600 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0513 22:47:51.696738    5672 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.24s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 service --namespace=default --https --url hello-node
E0513 22:48:32.758589    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-129600 service --namespace=default --https --url hello-node: exit status 1 (15.0412862s)

                                                
                                                
** stderr ** 
	W0513 22:48:32.560391    2448 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-129600 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-129600 service hello-node --url --format={{.IP}}: exit status 1 (15.0318959s)

                                                
                                                
** stderr ** 
	W0513 22:48:47.623187    8808 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-129600 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1544: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (15.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-129600 service hello-node --url: exit status 1 (15.0171472s)

                                                
                                                
** stderr ** 
	W0513 22:49:02.660865    4416 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-129600 service hello-node --url": exit status 1
functional_test.go:1561: found endpoint for hello-node: 
functional_test.go:1569: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.02s)

                                                
TestMultiControlPlane/serial/PingHostFromPods (63.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-586300 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-586300 -- exec busybox-fc5497c4f-hd72c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-586300 -- exec busybox-fc5497c4f-hd72c -- sh -c "ping -c 1 172.23.96.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-586300 -- exec busybox-fc5497c4f-hd72c -- sh -c "ping -c 1 172.23.96.1": exit status 1 (10.3869065s)

                                                
                                                
-- stdout --
	PING 172.23.96.1 (172.23.96.1): 56 data bytes
	
	--- 172.23.96.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 23:05:39.259436   12456 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.23.96.1) from pod (busybox-fc5497c4f-hd72c): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-586300 -- exec busybox-fc5497c4f-njj9r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-586300 -- exec busybox-fc5497c4f-njj9r -- sh -c "ping -c 1 172.23.96.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-586300 -- exec busybox-fc5497c4f-njj9r -- sh -c "ping -c 1 172.23.96.1": exit status 1 (10.3961998s)

                                                
                                                
-- stdout --
	PING 172.23.96.1 (172.23.96.1): 56 data bytes
	
	--- 172.23.96.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 23:05:50.062789    7344 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.23.96.1) from pod (busybox-fc5497c4f-njj9r): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-586300 -- exec busybox-fc5497c4f-v5w28 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-586300 -- exec busybox-fc5497c4f-v5w28 -- sh -c "ping -c 1 172.23.96.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-586300 -- exec busybox-fc5497c4f-v5w28 -- sh -c "ping -c 1 172.23.96.1": exit status 1 (10.4115825s)

                                                
                                                
-- stdout --
	PING 172.23.96.1 (172.23.96.1): 56 data bytes
	
	--- 172.23.96.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 23:06:00.913031    7336 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.23.96.1) from pod (busybox-fc5497c4f-v5w28): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-586300 -n ha-586300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-586300 -n ha-586300: (10.7098336s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 logs -n 25: (7.5946743s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | functional-129600 ssh pgrep          | functional-129600 | minikube5\jenkins | v1.33.1 | 13 May 24 22:51 UTC |                     |
	|         | buildkitd                            |                   |                   |         |                     |                     |
	| image   | functional-129600 image build -t     | functional-129600 | minikube5\jenkins | v1.33.1 | 13 May 24 22:51 UTC | 13 May 24 22:51 UTC |
	|         | localhost/my-image:functional-129600 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-129600 image ls           | functional-129600 | minikube5\jenkins | v1.33.1 | 13 May 24 22:51 UTC | 13 May 24 22:51 UTC |
	| delete  | -p functional-129600                 | functional-129600 | minikube5\jenkins | v1.33.1 | 13 May 24 22:53 UTC | 13 May 24 22:54 UTC |
	| start   | -p ha-586300 --wait=true             | ha-586300         | minikube5\jenkins | v1.33.1 | 13 May 24 22:54 UTC | 13 May 24 23:04 UTC |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-586300 -- apply -f             | ha-586300         | minikube5\jenkins | v1.33.1 | 13 May 24 23:05 UTC | 13 May 24 23:05 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-586300 -- rollout status       | ha-586300         | minikube5\jenkins | v1.33.1 | 13 May 24 23:05 UTC | 13 May 24 23:05 UTC |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-586300 -- get pods -o          | ha-586300         | minikube5\jenkins | v1.33.1 | 13 May 24 23:05 UTC | 13 May 24 23:05 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-586300 -- get pods -o          | ha-586300         | minikube5\jenkins | v1.33.1 | 13 May 24 23:05 UTC | 13 May 24 23:05 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-586300 -- exec                 | ha-586300         | minikube5\jenkins | v1.33.1 | 13 May 24 23:05 UTC | 13 May 24 23:05 UTC |
	|         | busybox-fc5497c4f-hd72c --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-586300 -- exec                 | ha-586300         | minikube5\jenkins | v1.33.1 | 13 May 24 23:05 UTC | 13 May 24 23:05 UTC |
	|         | busybox-fc5497c4f-njj9r --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-586300 -- exec                 | ha-586300         | minikube5\jenkins | v1.33.1 | 13 May 24 23:05 UTC | 13 May 24 23:05 UTC |
	|         | busybox-fc5497c4f-v5w28 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-586300 -- exec                 | ha-586300         | minikube5\jenkins | v1.33.1 | 13 May 24 23:05 UTC | 13 May 24 23:05 UTC |
	|         | busybox-fc5497c4f-hd72c --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-586300 -- exec                 | ha-586300         | minikube5\jenkins | v1.33.1 | 13 May 24 23:05 UTC | 13 May 24 23:05 UTC |
	|         | busybox-fc5497c4f-njj9r --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-586300 -- exec                 | ha-586300         | minikube5\jenkins | v1.33.1 | 13 May 24 23:05 UTC | 13 May 24 23:05 UTC |
	|         | busybox-fc5497c4f-v5w28 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-586300 -- exec                 | ha-586300         | minikube5\jenkins | v1.33.1 | 13 May 24 23:05 UTC | 13 May 24 23:05 UTC |
	|         | busybox-fc5497c4f-hd72c -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-586300 -- exec                 | ha-586300         | minikube5\jenkins | v1.33.1 | 13 May 24 23:05 UTC | 13 May 24 23:05 UTC |
	|         | busybox-fc5497c4f-njj9r -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-586300 -- exec                 | ha-586300         | minikube5\jenkins | v1.33.1 | 13 May 24 23:05 UTC | 13 May 24 23:05 UTC |
	|         | busybox-fc5497c4f-v5w28 -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-586300 -- get pods -o          | ha-586300         | minikube5\jenkins | v1.33.1 | 13 May 24 23:05 UTC | 13 May 24 23:05 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-586300 -- exec                 | ha-586300         | minikube5\jenkins | v1.33.1 | 13 May 24 23:05 UTC | 13 May 24 23:05 UTC |
	|         | busybox-fc5497c4f-hd72c              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-586300 -- exec                 | ha-586300         | minikube5\jenkins | v1.33.1 | 13 May 24 23:05 UTC |                     |
	|         | busybox-fc5497c4f-hd72c -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.23.96.1             |                   |                   |         |                     |                     |
	| kubectl | -p ha-586300 -- exec                 | ha-586300         | minikube5\jenkins | v1.33.1 | 13 May 24 23:05 UTC | 13 May 24 23:05 UTC |
	|         | busybox-fc5497c4f-njj9r              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-586300 -- exec                 | ha-586300         | minikube5\jenkins | v1.33.1 | 13 May 24 23:05 UTC |                     |
	|         | busybox-fc5497c4f-njj9r -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.23.96.1             |                   |                   |         |                     |                     |
	| kubectl | -p ha-586300 -- exec                 | ha-586300         | minikube5\jenkins | v1.33.1 | 13 May 24 23:06 UTC | 13 May 24 23:06 UTC |
	|         | busybox-fc5497c4f-v5w28              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-586300 -- exec                 | ha-586300         | minikube5\jenkins | v1.33.1 | 13 May 24 23:06 UTC |                     |
	|         | busybox-fc5497c4f-v5w28 -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.23.96.1             |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/13 22:54:40
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0513 22:54:40.050723   11992 out.go:291] Setting OutFile to fd 992 ...
	I0513 22:54:40.050723   11992 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:54:40.050723   11992 out.go:304] Setting ErrFile to fd 960...
	I0513 22:54:40.051723   11992 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:54:40.076723   11992 out.go:298] Setting JSON to false
	I0513 22:54:40.080566   11992 start.go:129] hostinfo: {"hostname":"minikube5","uptime":2443,"bootTime":1715638436,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4355 Build 19045.4355","kernelVersion":"10.0.19045.4355 Build 19045.4355","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0513 22:54:40.080685   11992 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 22:54:40.086154   11992 out.go:177] * [ha-586300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	I0513 22:54:40.089904   11992 notify.go:220] Checking for updates...
	I0513 22:54:40.092370   11992 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 22:54:40.095146   11992 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 22:54:40.097865   11992 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0513 22:54:40.100413   11992 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 22:54:40.102617   11992 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 22:54:40.106082   11992 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 22:54:44.836967   11992 out.go:177] * Using the hyperv driver based on user configuration
	I0513 22:54:44.839893   11992 start.go:297] selected driver: hyperv
	I0513 22:54:44.839893   11992 start.go:901] validating driver "hyperv" against <nil>
	I0513 22:54:44.839893   11992 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 22:54:44.882441   11992 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 22:54:44.883559   11992 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 22:54:44.883559   11992 cni.go:84] Creating CNI manager for ""
	I0513 22:54:44.883559   11992 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0513 22:54:44.883559   11992 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0513 22:54:44.883559   11992 start.go:340] cluster config:
	{Name:ha-586300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-586300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin
:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I0513 22:54:44.884556   11992 iso.go:125] acquiring lock: {Name:mkcecbdb7e30e5a0901160a859f9d5b65d250c44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 22:54:44.888031   11992 out.go:177] * Starting "ha-586300" primary control-plane node in "ha-586300" cluster
	I0513 22:54:44.890966   11992 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 22:54:44.891977   11992 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0513 22:54:44.891977   11992 cache.go:56] Caching tarball of preloaded images
	I0513 22:54:44.892298   11992 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0513 22:54:44.892298   11992 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 22:54:44.892948   11992 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\config.json ...
	I0513 22:54:44.893236   11992 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\config.json: {Name:mk9bf1a8c36fb3c2a6eb432b78e40cc7c3ec6d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:54:44.893448   11992 start.go:360] acquireMachinesLock for ha-586300: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 22:54:44.895393   11992 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-586300"
	I0513 22:54:44.895393   11992 start.go:93] Provisioning new machine with config: &{Name:ha-586300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-586300 Namespace:def
ault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 22:54:44.896449   11992 start.go:125] createHost starting for "" (driver="hyperv")
	I0513 22:54:44.902407   11992 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 22:54:44.902845   11992 start.go:159] libmachine.API.Create for "ha-586300" (driver="hyperv")
	I0513 22:54:44.902845   11992 client.go:168] LocalClient.Create starting
	I0513 22:54:44.903041   11992 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0513 22:54:44.903405   11992 main.go:141] libmachine: Decoding PEM data...
	I0513 22:54:44.903457   11992 main.go:141] libmachine: Parsing certificate...
	I0513 22:54:44.903539   11992 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0513 22:54:44.903539   11992 main.go:141] libmachine: Decoding PEM data...
	I0513 22:54:44.903539   11992 main.go:141] libmachine: Parsing certificate...
	I0513 22:54:44.903539   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0513 22:54:46.699025   11992 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0513 22:54:46.699025   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:54:46.699418   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0513 22:54:48.192453   11992 main.go:141] libmachine: [stdout =====>] : False
	
	I0513 22:54:48.193019   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:54:48.193081   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0513 22:54:49.507998   11992 main.go:141] libmachine: [stdout =====>] : True
	
	I0513 22:54:49.507998   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:54:49.508899   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0513 22:54:52.660535   11992 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0513 22:54:52.661290   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:54:52.662836   11992 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-amd64.iso...
	I0513 22:54:52.990932   11992 main.go:141] libmachine: Creating SSH key...
	I0513 22:54:53.092530   11992 main.go:141] libmachine: Creating VM...
	I0513 22:54:53.093542   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0513 22:54:55.554838   11992 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0513 22:54:55.555680   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:54:55.555754   11992 main.go:141] libmachine: Using switch "Default Switch"
	I0513 22:54:55.555813   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0513 22:54:57.056960   11992 main.go:141] libmachine: [stdout =====>] : True
	
	I0513 22:54:57.056960   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:54:57.057859   11992 main.go:141] libmachine: Creating VHD
	I0513 22:54:57.057859   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0513 22:55:00.538720   11992 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6572D9F0-51A2-4A27-9519-F7574A6B3534
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0513 22:55:00.538955   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:00.538955   11992 main.go:141] libmachine: Writing magic tar header
	I0513 22:55:00.539043   11992 main.go:141] libmachine: Writing SSH key tar header
	I0513 22:55:00.548212   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0513 22:55:03.547140   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:55:03.547140   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:03.547571   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\disk.vhd' -SizeBytes 20000MB
	I0513 22:55:05.905765   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:55:05.905976   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:05.905976   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-586300 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0513 22:55:09.238782   11992 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-586300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0513 22:55:09.238782   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:09.239208   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-586300 -DynamicMemoryEnabled $false
	I0513 22:55:11.253825   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:55:11.254432   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:11.254537   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-586300 -Count 2
	I0513 22:55:13.241001   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:55:13.241001   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:13.241514   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-586300 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\boot2docker.iso'
	I0513 22:55:15.561644   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:55:15.561644   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:15.562243   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-586300 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\disk.vhd'
	I0513 22:55:17.914085   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:55:17.914085   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:17.914085   11992 main.go:141] libmachine: Starting VM...
	I0513 22:55:17.914487   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-586300
	I0513 22:55:20.718303   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:55:20.718303   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:20.718303   11992 main.go:141] libmachine: Waiting for host to start...
	I0513 22:55:20.718712   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:55:22.768072   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:55:22.768072   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:22.768248   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:55:25.052320   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:55:25.052320   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:26.064864   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:55:28.052566   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:55:28.052566   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:28.053214   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:55:30.286852   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:55:30.286852   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:31.290134   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:55:33.242289   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:55:33.242289   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:33.242524   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:55:35.464671   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:55:35.464836   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:36.465663   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:55:38.438427   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:55:38.439331   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:38.439331   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:55:40.677929   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:55:40.677929   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:41.687410   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:55:43.639057   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:55:43.639057   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:43.639245   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:55:45.992991   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:55:45.992991   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:45.992991   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:55:47.879622   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:55:47.880502   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:47.880775   11992 machine.go:94] provisionDockerMachine start ...
	I0513 22:55:47.880775   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:55:49.806117   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:55:49.806117   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:49.806211   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:55:52.053553   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:55:52.053553   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:52.059341   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:55:52.070438   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.229 22 <nil> <nil>}
	I0513 22:55:52.070499   11992 main.go:141] libmachine: About to run SSH command:
	hostname
	I0513 22:55:52.227422   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0513 22:55:52.227422   11992 buildroot.go:166] provisioning hostname "ha-586300"
	I0513 22:55:52.227422   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:55:54.095806   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:55:54.095806   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:54.096172   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:55:56.367278   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:55:56.367278   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:56.371142   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:55:56.371142   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.229 22 <nil> <nil>}
	I0513 22:55:56.371142   11992 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-586300 && echo "ha-586300" | sudo tee /etc/hostname
	I0513 22:55:56.535204   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-586300
	
	I0513 22:55:56.535282   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:55:58.424502   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:55:58.424768   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:58.424871   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:00.665044   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:00.665044   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:00.668672   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:56:00.668672   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.229 22 <nil> <nil>}
	I0513 22:56:00.668672   11992 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-586300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-586300/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-586300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0513 22:56:00.822750   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0513 22:56:00.822750   11992 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0513 22:56:00.822946   11992 buildroot.go:174] setting up certificates
	I0513 22:56:00.822946   11992 provision.go:84] configureAuth start
	I0513 22:56:00.823080   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:02.751390   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:02.751390   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:02.751867   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:04.969160   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:04.969160   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:04.970083   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:06.838970   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:06.838970   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:06.839828   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:09.094319   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:09.094647   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:09.094647   11992 provision.go:143] copyHostCerts
	I0513 22:56:09.094809   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0513 22:56:09.095023   11992 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0513 22:56:09.095023   11992 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0513 22:56:09.095124   11992 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0513 22:56:09.096087   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0513 22:56:09.096215   11992 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0513 22:56:09.096215   11992 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0513 22:56:09.096215   11992 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0513 22:56:09.097084   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0513 22:56:09.097234   11992 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0513 22:56:09.097310   11992 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0513 22:56:09.097483   11992 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0513 22:56:09.098017   11992 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-586300 san=[127.0.0.1 172.23.102.229 ha-586300 localhost minikube]
	I0513 22:56:09.326691   11992 provision.go:177] copyRemoteCerts
	I0513 22:56:09.334693   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0513 22:56:09.334693   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:11.251332   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:11.251332   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:11.251575   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:13.501065   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:13.501295   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:13.501601   11992 sshutil.go:53] new ssh client: &{IP:172.23.102.229 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\id_rsa Username:docker}
	I0513 22:56:13.611146   11992 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2762835s)
	I0513 22:56:13.611146   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0513 22:56:13.611774   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0513 22:56:13.656804   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0513 22:56:13.657415   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0513 22:56:13.698805   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0513 22:56:13.699401   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0513 22:56:13.738511   11992 provision.go:87] duration metric: took 12.9150546s to configureAuth
	I0513 22:56:13.738511   11992 buildroot.go:189] setting minikube options for container-runtime
	I0513 22:56:13.739780   11992 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 22:56:13.739897   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:15.594877   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:15.594877   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:15.594877   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:17.849212   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:17.849212   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:17.853202   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:56:17.853619   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.229 22 <nil> <nil>}
	I0513 22:56:17.853619   11992 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0513 22:56:17.998409   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0513 22:56:17.998409   11992 buildroot.go:70] root file system type: tmpfs
	I0513 22:56:17.998409   11992 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0513 22:56:17.998409   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:19.900262   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:19.900262   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:19.900321   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:22.139933   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:22.139933   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:22.143859   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:56:22.144235   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.229 22 <nil> <nil>}
	I0513 22:56:22.144314   11992 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0513 22:56:22.305406   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0513 22:56:22.305511   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:24.195303   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:24.195303   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:24.196166   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:26.437076   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:26.437076   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:26.440584   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:56:26.441156   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.229 22 <nil> <nil>}
	I0513 22:56:26.441156   11992 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0513 22:56:28.498770   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0513 22:56:28.498862   11992 machine.go:97] duration metric: took 40.6164827s to provisionDockerMachine
	I0513 22:56:28.498862   11992 client.go:171] duration metric: took 1m43.5919519s to LocalClient.Create
	I0513 22:56:28.498993   11992 start.go:167] duration metric: took 1m43.5920824s to libmachine.API.Create "ha-586300"
	I0513 22:56:28.499057   11992 start.go:293] postStartSetup for "ha-586300" (driver="hyperv")
	I0513 22:56:28.499057   11992 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0513 22:56:28.510991   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0513 22:56:28.510991   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:30.387242   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:30.388004   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:30.388004   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:32.608187   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:32.608187   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:32.608547   11992 sshutil.go:53] new ssh client: &{IP:172.23.102.229 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\id_rsa Username:docker}
	I0513 22:56:32.720695   11992 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2095372s)
	I0513 22:56:32.729981   11992 ssh_runner.go:195] Run: cat /etc/os-release
	I0513 22:56:32.737165   11992 info.go:137] Remote host: Buildroot 2023.02.9
	I0513 22:56:32.737165   11992 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0513 22:56:32.737553   11992 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0513 22:56:32.738251   11992 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> 59842.pem in /etc/ssl/certs
	I0513 22:56:32.738323   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /etc/ssl/certs/59842.pem
	I0513 22:56:32.746908   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0513 22:56:32.761659   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /etc/ssl/certs/59842.pem (1708 bytes)
	I0513 22:56:32.806599   11992 start.go:296] duration metric: took 4.3073718s for postStartSetup
	I0513 22:56:32.808360   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:34.657120   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:34.657120   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:34.657196   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:36.867345   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:36.867345   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:36.867872   11992 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\config.json ...
	I0513 22:56:36.870218   11992 start.go:128] duration metric: took 1m51.9693716s to createHost
	I0513 22:56:36.870218   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:38.745897   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:38.745897   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:38.745897   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:40.988108   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:40.989066   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:40.992686   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:56:40.993342   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.229 22 <nil> <nil>}
	I0513 22:56:40.993342   11992 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0513 22:56:41.127579   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715641001.293069690
	
	I0513 22:56:41.127579   11992 fix.go:216] guest clock: 1715641001.293069690
	I0513 22:56:41.127579   11992 fix.go:229] Guest: 2024-05-13 22:56:41.29306969 +0000 UTC Remote: 2024-05-13 22:56:36.8702184 +0000 UTC m=+116.953513901 (delta=4.42285129s)
	I0513 22:56:41.128281   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:42.979603   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:42.979603   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:42.979682   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:45.204612   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:45.204612   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:45.210403   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:56:45.210473   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.229 22 <nil> <nil>}
	I0513 22:56:45.210473   11992 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715641001
	I0513 22:56:45.356514   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 13 22:56:41 UTC 2024
	
	I0513 22:56:45.356596   11992 fix.go:236] clock set: Mon May 13 22:56:41 UTC 2024
	 (err=<nil>)
	I0513 22:56:45.356596   11992 start.go:83] releasing machines lock for "ha-586300", held for 2m0.4564688s
	I0513 22:56:45.356821   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:47.200079   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:47.200497   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:47.200497   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:49.496322   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:49.496322   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:49.501115   11992 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0513 22:56:49.501225   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:49.510577   11992 ssh_runner.go:195] Run: cat /version.json
	I0513 22:56:49.510577   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:51.453113   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:51.453113   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:51.453220   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:51.453753   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:51.453753   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:51.453938   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:53.830207   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:53.830207   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:53.831385   11992 sshutil.go:53] new ssh client: &{IP:172.23.102.229 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\id_rsa Username:docker}
	I0513 22:56:53.849606   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:53.850623   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:53.850965   11992 sshutil.go:53] new ssh client: &{IP:172.23.102.229 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\id_rsa Username:docker}
	I0513 22:56:53.930608   11992 ssh_runner.go:235] Completed: cat /version.json: (4.4198556s)
	I0513 22:56:53.942212   11992 ssh_runner.go:195] Run: systemctl --version
	I0513 22:56:54.009421   11992 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5081271s)
	I0513 22:56:54.021893   11992 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0513 22:56:54.031235   11992 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0513 22:56:54.044192   11992 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0513 22:56:54.068622   11992 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0513 22:56:54.068622   11992 start.go:494] detecting cgroup driver to use...
	I0513 22:56:54.069227   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 22:56:54.107429   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0513 22:56:54.132848   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0513 22:56:54.150027   11992 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0513 22:56:54.157905   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0513 22:56:54.189117   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 22:56:54.217414   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0513 22:56:54.246352   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 22:56:54.271134   11992 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0513 22:56:54.296469   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0513 22:56:54.320335   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0513 22:56:54.354896   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0513 22:56:54.386361   11992 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0513 22:56:54.415190   11992 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0513 22:56:54.444589   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:56:54.612531   11992 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0513 22:56:54.637389   11992 start.go:494] detecting cgroup driver to use...
	I0513 22:56:54.647732   11992 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0513 22:56:54.676868   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 22:56:54.708618   11992 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0513 22:56:54.743790   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 22:56:54.772289   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 22:56:54.804394   11992 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0513 22:56:54.863192   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 22:56:54.883694   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 22:56:54.925677   11992 ssh_runner.go:195] Run: which cri-dockerd
	I0513 22:56:54.941330   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0513 22:56:54.961803   11992 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0513 22:56:55.000622   11992 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0513 22:56:55.185909   11992 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0513 22:56:55.353725   11992 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0513 22:56:55.354052   11992 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0513 22:56:55.397672   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:56:55.560960   11992 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0513 22:56:58.037701   11992 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4766421s)
	I0513 22:56:58.048830   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0513 22:56:58.082368   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 22:56:58.114531   11992 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0513 22:56:58.288297   11992 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0513 22:56:58.450940   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:56:58.636351   11992 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0513 22:56:58.675165   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 22:56:58.705482   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:56:58.877626   11992 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0513 22:56:58.966407   11992 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0513 22:56:58.975566   11992 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0513 22:56:58.989297   11992 start.go:562] Will wait 60s for crictl version
	I0513 22:56:58.999445   11992 ssh_runner.go:195] Run: which crictl
	I0513 22:56:59.018483   11992 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0513 22:56:59.064682   11992 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0513 22:56:59.074508   11992 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 22:56:59.109388   11992 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 22:56:59.140814   11992 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0513 22:56:59.140923   11992 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0513 22:56:59.144541   11992 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0513 22:56:59.144541   11992 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0513 22:56:59.145085   11992 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0513 22:56:59.145085   11992 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:95:ed Flags:up|broadcast|multicast|running}
	I0513 22:56:59.148127   11992 ip.go:210] interface addr: fe80::3ceb:68d:afab:af25/64
	I0513 22:56:59.148164   11992 ip.go:210] interface addr: 172.23.96.1/20
	I0513 22:56:59.157742   11992 ssh_runner.go:195] Run: grep 172.23.96.1	host.minikube.internal$ /etc/hosts
	I0513 22:56:59.163114   11992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0513 22:56:59.193916   11992 kubeadm.go:877] updating cluster {Name:ha-586300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-586300 Namespace:default APIServerHAVIP
:172.23.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.102.229 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0513 22:56:59.194788   11992 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 22:56:59.199991   11992 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0513 22:56:59.217670   11992 docker.go:685] Got preloaded images: 
	I0513 22:56:59.217670   11992 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0513 22:56:59.225275   11992 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0513 22:56:59.250914   11992 ssh_runner.go:195] Run: which lz4
	I0513 22:56:59.256698   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0513 22:56:59.264911   11992 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0513 22:56:59.271343   11992 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0513 22:56:59.271482   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0513 22:57:00.692144   11992 docker.go:649] duration metric: took 1.4348105s to copy over tarball
	I0513 22:57:00.699966   11992 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0513 22:57:09.890973   11992 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (9.1905603s)
	I0513 22:57:09.891134   11992 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0513 22:57:09.949474   11992 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0513 22:57:09.968940   11992 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0513 22:57:10.010043   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:57:10.195908   11992 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0513 22:57:13.502028   11992 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3059892s)
	I0513 22:57:13.507396   11992 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0513 22:57:13.528626   11992 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0513 22:57:13.528626   11992 cache_images.go:84] Images are preloaded, skipping loading
	I0513 22:57:13.528626   11992 kubeadm.go:928] updating node { 172.23.102.229 8443 v1.30.0 docker true true} ...
	I0513 22:57:13.528626   11992 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-586300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.102.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-586300 Namespace:default APIServerHAVIP:172.23.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0513 22:57:13.538878   11992 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0513 22:57:13.568100   11992 cni.go:84] Creating CNI manager for ""
	I0513 22:57:13.568817   11992 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0513 22:57:13.568863   11992 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0513 22:57:13.568863   11992 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.102.229 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-586300 NodeName:ha-586300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.102.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.102.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0513 22:57:13.568863   11992 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.102.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-586300"
	  kubeletExtraArgs:
	    node-ip: 172.23.102.229
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.102.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0513 22:57:13.568863   11992 kube-vip.go:115] generating kube-vip config ...
	I0513 22:57:13.577147   11992 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0513 22:57:13.601883   11992 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0513 22:57:13.602736   11992 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.23.111.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0513 22:57:13.611440   11992 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0513 22:57:13.631631   11992 binaries.go:44] Found k8s binaries, skipping transfer
	I0513 22:57:13.641348   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0513 22:57:13.659492   11992 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0513 22:57:13.685374   11992 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0513 22:57:13.717509   11992 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0513 22:57:13.746933   11992 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0513 22:57:13.787353   11992 ssh_runner.go:195] Run: grep 172.23.111.254	control-plane.minikube.internal$ /etc/hosts
	I0513 22:57:13.793176   11992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.111.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0513 22:57:13.826003   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:57:14.001152   11992 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 22:57:14.026204   11992 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300 for IP: 172.23.102.229
	I0513 22:57:14.026204   11992 certs.go:194] generating shared ca certs ...
	I0513 22:57:14.026204   11992 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:57:14.027073   11992 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0513 22:57:14.027237   11992 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0513 22:57:14.027237   11992 certs.go:256] generating profile certs ...
	I0513 22:57:14.028011   11992 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\client.key
	I0513 22:57:14.028093   11992 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\client.crt with IP's: []
	I0513 22:57:14.336335   11992 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\client.crt ...
	I0513 22:57:14.336335   11992 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\client.crt: {Name:mk9dc4b347341b7a60c4c1778c5c41fc236f656a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:57:14.337644   11992 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\client.key ...
	I0513 22:57:14.338198   11992 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\client.key: {Name:mk1658713091c08ebf368e2a1623cd79fe676f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:57:14.339054   11992 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.b1c9a291
	I0513 22:57:14.339054   11992 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.b1c9a291 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.102.229 172.23.111.254]
	I0513 22:57:14.600696   11992 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.b1c9a291 ...
	I0513 22:57:14.600696   11992 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.b1c9a291: {Name:mk88087cc6424098a5e4267c0610ce040ed6c02d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:57:14.602385   11992 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.b1c9a291 ...
	I0513 22:57:14.602385   11992 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.b1c9a291: {Name:mk1ec9ce49999003f9d1727e5d9543b53d6d4347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:57:14.602850   11992 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.b1c9a291 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt
	I0513 22:57:14.614560   11992 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.b1c9a291 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key
	I0513 22:57:14.615549   11992 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key
	I0513 22:57:14.615549   11992 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.crt with IP's: []
	I0513 22:57:14.756371   11992 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.crt ...
	I0513 22:57:14.756371   11992 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.crt: {Name:mk5e1baa9e5c947c5c2eea90c3d72bdb4ccffcb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:57:14.756747   11992 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key ...
	I0513 22:57:14.756747   11992 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key: {Name:mk0f18f00f0a5dbad7013c9d316f7da4b9af2090 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:57:14.757777   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0513 22:57:14.758803   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0513 22:57:14.758963   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0513 22:57:14.759083   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0513 22:57:14.759202   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0513 22:57:14.759254   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0513 22:57:14.759444   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0513 22:57:14.769688   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0513 22:57:14.771052   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem (1338 bytes)
	W0513 22:57:14.771211   11992 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984_empty.pem, impossibly tiny 0 bytes
	I0513 22:57:14.771211   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0513 22:57:14.771460   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0513 22:57:14.771648   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0513 22:57:14.771805   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0513 22:57:14.771964   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem (1708 bytes)
	I0513 22:57:14.771964   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0513 22:57:14.772440   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem -> /usr/share/ca-certificates/5984.pem
	I0513 22:57:14.772440   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /usr/share/ca-certificates/59842.pem
	I0513 22:57:14.772786   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0513 22:57:14.819679   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0513 22:57:14.855282   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0513 22:57:14.893580   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0513 22:57:14.934992   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0513 22:57:14.978256   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0513 22:57:15.020134   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0513 22:57:15.060668   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0513 22:57:15.110083   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0513 22:57:15.151396   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem --> /usr/share/ca-certificates/5984.pem (1338 bytes)
	I0513 22:57:15.202328   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /usr/share/ca-certificates/59842.pem (1708 bytes)
	I0513 22:57:15.248114   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0513 22:57:15.286786   11992 ssh_runner.go:195] Run: openssl version
	I0513 22:57:15.306764   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0513 22:57:15.337318   11992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0513 22:57:15.348218   11992 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0513 22:57:15.359077   11992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0513 22:57:15.377093   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0513 22:57:15.405422   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5984.pem && ln -fs /usr/share/ca-certificates/5984.pem /etc/ssl/certs/5984.pem"
	I0513 22:57:15.428760   11992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5984.pem
	I0513 22:57:15.436046   11992 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0513 22:57:15.446148   11992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5984.pem
	I0513 22:57:15.462308   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5984.pem /etc/ssl/certs/51391683.0"
	I0513 22:57:15.489596   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59842.pem && ln -fs /usr/share/ca-certificates/59842.pem /etc/ssl/certs/59842.pem"
	I0513 22:57:15.518239   11992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59842.pem
	I0513 22:57:15.525660   11992 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0513 22:57:15.536691   11992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59842.pem
	I0513 22:57:15.553797   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59842.pem /etc/ssl/certs/3ec20f2e.0"
	I0513 22:57:15.577229   11992 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0513 22:57:15.585803   11992 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0513 22:57:15.585803   11992 kubeadm.go:391] StartCluster: {Name:ha-586300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-586300 Namespace:default APIServerHAVIP:172.23.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.102.229 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 22:57:15.592377   11992 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0513 22:57:15.619384   11992 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0513 22:57:15.644222   11992 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0513 22:57:15.671198   11992 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0513 22:57:15.685583   11992 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0513 22:57:15.685583   11992 kubeadm.go:156] found existing configuration files:
	
	I0513 22:57:15.694426   11992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0513 22:57:15.709580   11992 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0513 22:57:15.721572   11992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0513 22:57:15.744066   11992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0513 22:57:15.760277   11992 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0513 22:57:15.772496   11992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0513 22:57:15.797916   11992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0513 22:57:15.812649   11992 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0513 22:57:15.823721   11992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0513 22:57:15.849528   11992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0513 22:57:15.864212   11992 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0513 22:57:15.875186   11992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0513 22:57:15.890922   11992 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0513 22:57:16.245420   11992 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0513 22:57:28.912636   11992 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0513 22:57:28.912912   11992 kubeadm.go:309] [preflight] Running pre-flight checks
	I0513 22:57:28.913304   11992 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0513 22:57:28.913522   11992 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0513 22:57:28.914059   11992 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0513 22:57:28.914231   11992 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0513 22:57:28.917716   11992 out.go:204]   - Generating certificates and keys ...
	I0513 22:57:28.917901   11992 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0513 22:57:28.917901   11992 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0513 22:57:28.917901   11992 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0513 22:57:28.918460   11992 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0513 22:57:28.918714   11992 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0513 22:57:28.918827   11992 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0513 22:57:28.918827   11992 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0513 22:57:28.919172   11992 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-586300 localhost] and IPs [172.23.102.229 127.0.0.1 ::1]
	I0513 22:57:28.919172   11992 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0513 22:57:28.919508   11992 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-586300 localhost] and IPs [172.23.102.229 127.0.0.1 ::1]
	I0513 22:57:28.919508   11992 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0513 22:57:28.919508   11992 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0513 22:57:28.919508   11992 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0513 22:57:28.920057   11992 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0513 22:57:28.920057   11992 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0513 22:57:28.920269   11992 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0513 22:57:28.920269   11992 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0513 22:57:28.920269   11992 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0513 22:57:28.920269   11992 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0513 22:57:28.920818   11992 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0513 22:57:28.920977   11992 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0513 22:57:28.923966   11992 out.go:204]   - Booting up control plane ...
	I0513 22:57:28.924629   11992 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0513 22:57:28.924629   11992 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0513 22:57:28.924629   11992 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0513 22:57:28.925231   11992 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0513 22:57:28.925474   11992 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0513 22:57:28.925474   11992 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0513 22:57:28.925712   11992 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0513 22:57:28.925712   11992 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0513 22:57:28.925712   11992 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002877081s
	I0513 22:57:28.926245   11992 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0513 22:57:28.926297   11992 kubeadm.go:309] [api-check] The API server is healthy after 7.003004483s
	I0513 22:57:28.926297   11992 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0513 22:57:28.926297   11992 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0513 22:57:28.926917   11992 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0513 22:57:28.927156   11992 kubeadm.go:309] [mark-control-plane] Marking the node ha-586300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0513 22:57:28.927432   11992 kubeadm.go:309] [bootstrap-token] Using token: ynj82i.n6eonv2vordb1vfy
	I0513 22:57:28.930010   11992 out.go:204]   - Configuring RBAC rules ...
	I0513 22:57:28.931149   11992 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0513 22:57:28.931149   11992 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0513 22:57:28.931673   11992 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0513 22:57:28.931702   11992 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0513 22:57:28.931702   11992 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0513 22:57:28.932258   11992 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0513 22:57:28.932423   11992 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0513 22:57:28.932624   11992 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0513 22:57:28.932624   11992 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0513 22:57:28.932624   11992 kubeadm.go:309] 
	I0513 22:57:28.932624   11992 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0513 22:57:28.932624   11992 kubeadm.go:309] 
	I0513 22:57:28.932624   11992 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0513 22:57:28.932624   11992 kubeadm.go:309] 
	I0513 22:57:28.932624   11992 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0513 22:57:28.933257   11992 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0513 22:57:28.933357   11992 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0513 22:57:28.933357   11992 kubeadm.go:309] 
	I0513 22:57:28.933357   11992 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0513 22:57:28.933357   11992 kubeadm.go:309] 
	I0513 22:57:28.933357   11992 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0513 22:57:28.933357   11992 kubeadm.go:309] 
	I0513 22:57:28.933357   11992 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0513 22:57:28.933906   11992 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0513 22:57:28.933959   11992 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0513 22:57:28.933959   11992 kubeadm.go:309] 
	I0513 22:57:28.933959   11992 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0513 22:57:28.933959   11992 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0513 22:57:28.933959   11992 kubeadm.go:309] 
	I0513 22:57:28.934623   11992 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ynj82i.n6eonv2vordb1vfy \
	I0513 22:57:28.934836   11992 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 \
	I0513 22:57:28.934878   11992 kubeadm.go:309] 	--control-plane 
	I0513 22:57:28.934960   11992 kubeadm.go:309] 
	I0513 22:57:28.934960   11992 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0513 22:57:28.935172   11992 kubeadm.go:309] 
	I0513 22:57:28.935172   11992 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ynj82i.n6eonv2vordb1vfy \
	I0513 22:57:28.935919   11992 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 
	I0513 22:57:28.936717   11992 cni.go:84] Creating CNI manager for ""
	I0513 22:57:28.936717   11992 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0513 22:57:28.942702   11992 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0513 22:57:28.953617   11992 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0513 22:57:28.961223   11992 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0513 22:57:28.961272   11992 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0513 22:57:29.001252   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0513 22:57:29.488035   11992 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0513 22:57:29.501285   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-586300 minikube.k8s.io/updated_at=2024_05_13T22_57_29_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761 minikube.k8s.io/name=ha-586300 minikube.k8s.io/primary=true
	I0513 22:57:29.503506   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:29.541845   11992 ops.go:34] apiserver oom_adj: -16
	I0513 22:57:29.721723   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:30.231002   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:30.740602   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:31.238191   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:31.743211   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:32.225571   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:32.729560   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:33.227963   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:33.726231   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:34.231133   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:34.729556   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:35.228582   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:35.734606   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:36.233364   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:36.738564   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:37.227234   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:37.732458   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:38.229612   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:38.728531   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:39.230002   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:39.731046   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:40.231116   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:40.726192   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:41.236256   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:41.740854   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:42.228034   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:42.381255   11992 kubeadm.go:1107] duration metric: took 12.8927049s to wait for elevateKubeSystemPrivileges
	W0513 22:57:42.381740   11992 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0513 22:57:42.381740   11992 kubeadm.go:393] duration metric: took 26.7948701s to StartCluster
	I0513 22:57:42.381830   11992 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:57:42.381943   11992 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 22:57:42.384248   11992 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:57:42.386083   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0513 22:57:42.386330   11992 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.23.102.229 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 22:57:42.386641   11992 start.go:240] waiting for startup goroutines ...
	I0513 22:57:42.386330   11992 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0513 22:57:42.386691   11992 addons.go:69] Setting default-storageclass=true in profile "ha-586300"
	I0513 22:57:42.386691   11992 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 22:57:42.386691   11992 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-586300"
	I0513 22:57:42.386691   11992 addons.go:69] Setting storage-provisioner=true in profile "ha-586300"
	I0513 22:57:42.386691   11992 addons.go:234] Setting addon storage-provisioner=true in "ha-586300"
	I0513 22:57:42.387293   11992 host.go:66] Checking if "ha-586300" exists ...
	I0513 22:57:42.387623   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:57:42.388370   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:57:42.561504   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.23.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0513 22:57:42.913111   11992 start.go:946] {"host.minikube.internal": 172.23.96.1} host record injected into CoreDNS's ConfigMap
	I0513 22:57:44.473993   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:57:44.473993   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:57:44.478219   11992 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 22:57:44.481630   11992 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0513 22:57:44.481630   11992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0513 22:57:44.481801   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:57:44.493890   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:57:44.493966   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:57:44.494606   11992 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 22:57:44.494606   11992 kapi.go:59] client config for ha-586300: &rest.Config{Host:"https://172.23.111.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-586300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-586300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0513 22:57:44.496316   11992 cert_rotation.go:137] Starting client certificate rotation controller
	I0513 22:57:44.496580   11992 addons.go:234] Setting addon default-storageclass=true in "ha-586300"
	I0513 22:57:44.496580   11992 host.go:66] Checking if "ha-586300" exists ...
	I0513 22:57:44.497439   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:57:46.544655   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:57:46.544655   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:57:46.544879   11992 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0513 22:57:46.544879   11992 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0513 22:57:46.544879   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:57:46.545914   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:57:46.545914   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:57:46.545974   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:57:48.598859   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:57:48.598859   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:57:48.599115   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:57:48.976593   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:57:48.976593   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:57:48.976593   11992 sshutil.go:53] new ssh client: &{IP:172.23.102.229 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\id_rsa Username:docker}
	I0513 22:57:49.114228   11992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0513 22:57:50.961504   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:57:50.961504   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:57:50.962143   11992 sshutil.go:53] new ssh client: &{IP:172.23.102.229 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\id_rsa Username:docker}
	I0513 22:57:51.095010   11992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0513 22:57:51.250573   11992 round_trippers.go:463] GET https://172.23.111.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0513 22:57:51.250573   11992 round_trippers.go:469] Request Headers:
	I0513 22:57:51.250573   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:57:51.250573   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:57:51.264154   11992 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0513 22:57:51.264764   11992 round_trippers.go:463] PUT https://172.23.111.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0513 22:57:51.264764   11992 round_trippers.go:469] Request Headers:
	I0513 22:57:51.264764   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:57:51.264764   11992 round_trippers.go:473]     Content-Type: application/json
	I0513 22:57:51.264764   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:57:51.267943   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 22:57:51.272284   11992 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0513 22:57:51.275496   11992 addons.go:505] duration metric: took 8.8889015s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0513 22:57:51.275496   11992 start.go:245] waiting for cluster config update ...
	I0513 22:57:51.275496   11992 start.go:254] writing updated cluster config ...
	I0513 22:57:51.278672   11992 out.go:177] 
	I0513 22:57:51.290807   11992 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 22:57:51.290807   11992 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\config.json ...
	I0513 22:57:51.295960   11992 out.go:177] * Starting "ha-586300-m02" control-plane node in "ha-586300" cluster
	I0513 22:57:51.298225   11992 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 22:57:51.298632   11992 cache.go:56] Caching tarball of preloaded images
	I0513 22:57:51.298632   11992 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0513 22:57:51.298632   11992 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 22:57:51.299361   11992 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\config.json ...
	I0513 22:57:51.304796   11992 start.go:360] acquireMachinesLock for ha-586300-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 22:57:51.304796   11992 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-586300-m02"
	I0513 22:57:51.304796   11992 start.go:93] Provisioning new machine with config: &{Name:ha-586300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-586300 Namespace:def
ault APIServerHAVIP:172.23.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.102.229 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString
:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 22:57:51.304796   11992 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0513 22:57:51.310112   11992 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 22:57:51.310112   11992 start.go:159] libmachine.API.Create for "ha-586300" (driver="hyperv")
	I0513 22:57:51.310112   11992 client.go:168] LocalClient.Create starting
	I0513 22:57:51.310650   11992 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0513 22:57:51.310773   11992 main.go:141] libmachine: Decoding PEM data...
	I0513 22:57:51.310773   11992 main.go:141] libmachine: Parsing certificate...
	I0513 22:57:51.310773   11992 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0513 22:57:51.310773   11992 main.go:141] libmachine: Decoding PEM data...
	I0513 22:57:51.310773   11992 main.go:141] libmachine: Parsing certificate...
	I0513 22:57:51.310773   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0513 22:57:52.957696   11992 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0513 22:57:52.957696   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:57:52.958200   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0513 22:57:54.540990   11992 main.go:141] libmachine: [stdout =====>] : False
	
	I0513 22:57:54.540990   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:57:54.540990   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0513 22:57:55.937657   11992 main.go:141] libmachine: [stdout =====>] : True
	
	I0513 22:57:55.937657   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:57:55.937657   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0513 22:57:59.127033   11992 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0513 22:57:59.127713   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:57:59.129585   11992 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-amd64.iso...
	I0513 22:57:59.481868   11992 main.go:141] libmachine: Creating SSH key...
	I0513 22:57:59.666272   11992 main.go:141] libmachine: Creating VM...
	I0513 22:57:59.666272   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0513 22:58:02.254045   11992 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0513 22:58:02.254045   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:02.254045   11992 main.go:141] libmachine: Using switch "Default Switch"
	I0513 22:58:02.254045   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0513 22:58:03.856076   11992 main.go:141] libmachine: [stdout =====>] : True
	
	I0513 22:58:03.856595   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:03.856595   11992 main.go:141] libmachine: Creating VHD
	I0513 22:58:03.856668   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0513 22:58:07.328003   11992 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 7ED062D4-E020-43AF-A7EC-0E9D8E0256F5
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0513 22:58:07.328003   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:07.328078   11992 main.go:141] libmachine: Writing magic tar header
	I0513 22:58:07.328078   11992 main.go:141] libmachine: Writing SSH key tar header
	I0513 22:58:07.336465   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0513 22:58:10.263301   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:58:10.264335   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:10.264380   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\disk.vhd' -SizeBytes 20000MB
	I0513 22:58:12.596156   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:58:12.596221   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:12.596221   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-586300-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0513 22:58:15.803398   11992 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-586300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0513 22:58:15.803662   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:15.803662   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-586300-m02 -DynamicMemoryEnabled $false
	I0513 22:58:17.789394   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:58:17.790107   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:17.790107   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-586300-m02 -Count 2
	I0513 22:58:19.723214   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:58:19.723214   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:19.723598   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-586300-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\boot2docker.iso'
	I0513 22:58:22.033949   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:58:22.033949   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:22.033949   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-586300-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\disk.vhd'
	I0513 22:58:24.391458   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:58:24.391458   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:24.391458   11992 main.go:141] libmachine: Starting VM...
	I0513 22:58:24.391527   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-586300-m02
	I0513 22:58:27.153185   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:58:27.153185   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:27.153185   11992 main.go:141] libmachine: Waiting for host to start...
	I0513 22:58:27.153185   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:58:29.161623   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:58:29.162065   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:29.162086   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:58:31.357356   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:58:31.357356   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:32.366040   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:58:34.319749   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:58:34.319749   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:34.319749   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:58:36.568076   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:58:36.568348   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:37.568577   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:58:39.547280   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:58:39.547340   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:39.547545   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:58:41.835281   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:58:41.835528   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:42.839818   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:58:44.808909   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:58:44.808909   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:44.809007   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:58:47.031235   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:58:47.031235   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:48.046089   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:58:49.971223   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:58:49.971223   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:49.971901   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:58:52.319200   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:58:52.319200   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:52.319669   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:58:54.219331   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:58:54.219784   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:54.219784   11992 machine.go:94] provisionDockerMachine start ...
	I0513 22:58:54.219868   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:58:56.133003   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:58:56.133286   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:56.133399   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:58:58.370942   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:58:58.370994   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:58.373966   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:58:58.385180   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.68 22 <nil> <nil>}
	I0513 22:58:58.385180   11992 main.go:141] libmachine: About to run SSH command:
	hostname
	I0513 22:58:58.512374   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0513 22:58:58.512374   11992 buildroot.go:166] provisioning hostname "ha-586300-m02"
	I0513 22:58:58.512374   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:00.380480   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:00.380480   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:00.380480   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:02.622182   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:02.623005   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:02.628418   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:59:02.628418   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.68 22 <nil> <nil>}
	I0513 22:59:02.628418   11992 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-586300-m02 && echo "ha-586300-m02" | sudo tee /etc/hostname
	I0513 22:59:02.791744   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-586300-m02
	
	I0513 22:59:02.791744   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:04.714403   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:04.714403   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:04.714604   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:06.977055   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:06.977435   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:06.981289   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:59:06.981462   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.68 22 <nil> <nil>}
	I0513 22:59:06.981462   11992 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-586300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-586300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-586300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0513 22:59:07.110619   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0513 22:59:07.110619   11992 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0513 22:59:07.110619   11992 buildroot.go:174] setting up certificates
	I0513 22:59:07.110619   11992 provision.go:84] configureAuth start
	I0513 22:59:07.110619   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:09.021585   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:09.021585   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:09.022521   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:11.276521   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:11.276877   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:11.276877   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:13.173312   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:13.174339   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:13.174415   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:15.439712   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:15.440140   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:15.440140   11992 provision.go:143] copyHostCerts
	I0513 22:59:15.440272   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0513 22:59:15.440272   11992 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0513 22:59:15.440272   11992 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0513 22:59:15.440272   11992 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0513 22:59:15.441749   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0513 22:59:15.441899   11992 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0513 22:59:15.441972   11992 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0513 22:59:15.442207   11992 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0513 22:59:15.442913   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0513 22:59:15.443083   11992 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0513 22:59:15.443167   11992 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0513 22:59:15.443403   11992 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0513 22:59:15.444175   11992 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-586300-m02 san=[127.0.0.1 172.23.108.68 ha-586300-m02 localhost minikube]
	I0513 22:59:15.589413   11992 provision.go:177] copyRemoteCerts
	I0513 22:59:15.598046   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0513 22:59:15.598046   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:17.541658   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:17.541658   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:17.541932   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:19.810964   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:19.810964   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:19.811333   11992 sshutil.go:53] new ssh client: &{IP:172.23.108.68 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\id_rsa Username:docker}
	I0513 22:59:19.905707   11992 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3074896s)
	I0513 22:59:19.905778   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0513 22:59:19.905778   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0513 22:59:19.956937   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0513 22:59:19.956937   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0513 22:59:19.998480   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0513 22:59:19.998942   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0513 22:59:20.048199   11992 provision.go:87] duration metric: took 12.9370024s to configureAuth
	I0513 22:59:20.048254   11992 buildroot.go:189] setting minikube options for container-runtime
	I0513 22:59:20.049043   11992 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 22:59:20.049186   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:21.952491   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:21.953462   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:21.953462   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:24.184676   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:24.184676   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:24.189694   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:59:24.189694   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.68 22 <nil> <nil>}
	I0513 22:59:24.189694   11992 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0513 22:59:24.313297   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0513 22:59:24.313297   11992 buildroot.go:70] root file system type: tmpfs
	I0513 22:59:24.313297   11992 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0513 22:59:24.313832   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:26.225193   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:26.225695   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:26.225746   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:28.488208   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:28.488208   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:28.491697   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:59:28.492302   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.68 22 <nil> <nil>}
	I0513 22:59:28.492302   11992 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.23.102.229"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0513 22:59:28.640364   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.23.102.229
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0513 22:59:28.640364   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:30.570651   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:30.570802   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:30.570802   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:32.822577   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:32.822577   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:32.826772   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:59:32.826772   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.68 22 <nil> <nil>}
	I0513 22:59:32.826772   11992 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0513 22:59:34.888133   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0513 22:59:34.888133   11992 machine.go:97] duration metric: took 40.6667265s to provisionDockerMachine
	I0513 22:59:34.888133   11992 client.go:171] duration metric: took 1m43.5738887s to LocalClient.Create
	I0513 22:59:34.888133   11992 start.go:167] duration metric: took 1m43.5738887s to libmachine.API.Create "ha-586300"
	I0513 22:59:34.888133   11992 start.go:293] postStartSetup for "ha-586300-m02" (driver="hyperv")
	I0513 22:59:34.888133   11992 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0513 22:59:34.896119   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0513 22:59:34.896119   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:36.764835   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:36.764835   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:36.764912   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:39.020300   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:39.020361   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:39.020418   11992 sshutil.go:53] new ssh client: &{IP:172.23.108.68 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\id_rsa Username:docker}
	I0513 22:59:39.125680   11992 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2293532s)
	I0513 22:59:39.133713   11992 ssh_runner.go:195] Run: cat /etc/os-release
	I0513 22:59:39.140929   11992 info.go:137] Remote host: Buildroot 2023.02.9
	I0513 22:59:39.140929   11992 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0513 22:59:39.140929   11992 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0513 22:59:39.141974   11992 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> 59842.pem in /etc/ssl/certs
	I0513 22:59:39.141974   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /etc/ssl/certs/59842.pem
	I0513 22:59:39.150052   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0513 22:59:39.165785   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /etc/ssl/certs/59842.pem (1708 bytes)
	I0513 22:59:39.210255   11992 start.go:296] duration metric: took 4.3219487s for postStartSetup
	I0513 22:59:39.212337   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:41.100845   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:41.101688   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:41.101782   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:43.400172   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:43.400172   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:43.400644   11992 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\config.json ...
	I0513 22:59:43.401847   11992 start.go:128] duration metric: took 1m52.0925773s to createHost
	I0513 22:59:43.402376   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:45.270655   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:45.271313   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:45.271313   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:47.548125   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:47.548125   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:47.552502   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:59:47.552756   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.68 22 <nil> <nil>}
	I0513 22:59:47.552756   11992 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0513 22:59:47.679763   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715641187.839715177
	
	I0513 22:59:47.679866   11992 fix.go:216] guest clock: 1715641187.839715177
	I0513 22:59:47.679866   11992 fix.go:229] Guest: 2024-05-13 22:59:47.839715177 +0000 UTC Remote: 2024-05-13 22:59:43.4018473 +0000 UTC m=+303.477709501 (delta=4.437867877s)
	I0513 22:59:47.679866   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:49.530810   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:49.530810   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:49.530810   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:51.763607   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:51.764158   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:51.767068   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:59:51.767643   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.68 22 <nil> <nil>}
	I0513 22:59:51.767643   11992 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715641187
	I0513 22:59:51.906004   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 13 22:59:47 UTC 2024
	
	I0513 22:59:51.906004   11992 fix.go:236] clock set: Mon May 13 22:59:47 UTC 2024
	 (err=<nil>)
	I0513 22:59:51.906004   11992 start.go:83] releasing machines lock for "ha-586300-m02", held for 2m0.5963942s
	I0513 22:59:51.906622   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:53.780156   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:53.780156   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:53.780156   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:56.017431   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:56.018024   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:56.022985   11992 out.go:177] * Found network options:
	I0513 22:59:56.025146   11992 out.go:177]   - NO_PROXY=172.23.102.229
	W0513 22:59:56.027581   11992 proxy.go:119] fail to check proxy env: Error ip not in block
	I0513 22:59:56.028961   11992 out.go:177]   - NO_PROXY=172.23.102.229
	W0513 22:59:56.031851   11992 proxy.go:119] fail to check proxy env: Error ip not in block
	W0513 22:59:56.033025   11992 proxy.go:119] fail to check proxy env: Error ip not in block
	I0513 22:59:56.034881   11992 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0513 22:59:56.034881   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:56.041879   11992 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0513 22:59:56.041879   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:57.987643   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:57.987643   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:57.987643   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:57.988248   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:57.988248   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:57.988248   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:00:00.322394   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 23:00:00.322394   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:00:00.322394   11992 sshutil.go:53] new ssh client: &{IP:172.23.108.68 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\id_rsa Username:docker}
	I0513 23:00:00.345380   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 23:00:00.345380   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:00:00.345380   11992 sshutil.go:53] new ssh client: &{IP:172.23.108.68 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\id_rsa Username:docker}
	I0513 23:00:00.422818   11992 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.3807633s)
	W0513 23:00:00.422818   11992 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0513 23:00:00.433044   11992 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0513 23:00:00.644064   11992 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0513 23:00:00.644064   11992 start.go:494] detecting cgroup driver to use...
	I0513 23:00:00.644064   11992 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6089981s)
	I0513 23:00:00.644064   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 23:00:00.685759   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0513 23:00:00.712915   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0513 23:00:00.731935   11992 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0513 23:00:00.741228   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0513 23:00:00.767732   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 23:00:00.793189   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0513 23:00:00.820647   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 23:00:00.849444   11992 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0513 23:00:00.879897   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0513 23:00:00.906252   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0513 23:00:00.933363   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0513 23:00:00.958899   11992 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0513 23:00:00.984035   11992 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0513 23:00:01.008999   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:00:01.201889   11992 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0513 23:00:01.234211   11992 start.go:494] detecting cgroup driver to use...
	I0513 23:00:01.244060   11992 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0513 23:00:01.276310   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 23:00:01.309849   11992 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0513 23:00:01.357918   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 23:00:01.394200   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 23:00:01.425951   11992 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0513 23:00:01.493157   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 23:00:01.518492   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 23:00:01.564643   11992 ssh_runner.go:195] Run: which cri-dockerd
	I0513 23:00:01.581079   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0513 23:00:01.599043   11992 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0513 23:00:01.638891   11992 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0513 23:00:01.846689   11992 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0513 23:00:02.019200   11992 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0513 23:00:02.019200   11992 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0513 23:00:02.064212   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:00:02.254716   11992 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0513 23:00:04.767547   11992 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5127308s)
	I0513 23:00:04.776622   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0513 23:00:04.808242   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 23:00:04.845442   11992 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0513 23:00:05.029236   11992 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0513 23:00:05.212292   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:00:05.387410   11992 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0513 23:00:05.425971   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 23:00:05.458973   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:00:05.651539   11992 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0513 23:00:05.753117   11992 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0513 23:00:05.761858   11992 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0513 23:00:05.770723   11992 start.go:562] Will wait 60s for crictl version
	I0513 23:00:05.782718   11992 ssh_runner.go:195] Run: which crictl
	I0513 23:00:05.797063   11992 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0513 23:00:05.852053   11992 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0513 23:00:05.861262   11992 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 23:00:05.896587   11992 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 23:00:05.928336   11992 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0513 23:00:05.930917   11992 out.go:177]   - env NO_PROXY=172.23.102.229
	I0513 23:00:05.933280   11992 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0513 23:00:05.936777   11992 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0513 23:00:05.936777   11992 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0513 23:00:05.936777   11992 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0513 23:00:05.936777   11992 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:95:ed Flags:up|broadcast|multicast|running}
	I0513 23:00:05.939718   11992 ip.go:210] interface addr: fe80::3ceb:68d:afab:af25/64
	I0513 23:00:05.939718   11992 ip.go:210] interface addr: 172.23.96.1/20
	I0513 23:00:05.947539   11992 ssh_runner.go:195] Run: grep 172.23.96.1	host.minikube.internal$ /etc/hosts
	I0513 23:00:05.953636   11992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0513 23:00:05.974279   11992 mustload.go:65] Loading cluster: ha-586300
	I0513 23:00:05.974772   11992 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:00:05.974772   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 23:00:07.984072   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:00:07.984072   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:00:07.984072   11992 host.go:66] Checking if "ha-586300" exists ...
	I0513 23:00:07.984784   11992 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300 for IP: 172.23.108.68
	I0513 23:00:07.984784   11992 certs.go:194] generating shared ca certs ...
	I0513 23:00:07.984784   11992 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:00:07.985484   11992 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0513 23:00:07.985484   11992 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0513 23:00:07.985484   11992 certs.go:256] generating profile certs ...
	I0513 23:00:07.986386   11992 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\client.key
	I0513 23:00:07.986561   11992 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.6bf21e4f
	I0513 23:00:07.986588   11992 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.6bf21e4f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.102.229 172.23.108.68 172.23.111.254]
	I0513 23:00:08.079753   11992 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.6bf21e4f ...
	I0513 23:00:08.079753   11992 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.6bf21e4f: {Name:mk3b4d314abff0859b142f769105005e7fbc5a7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:00:08.080760   11992 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.6bf21e4f ...
	I0513 23:00:08.080760   11992 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.6bf21e4f: {Name:mk35b31305d5e6a9cf5203f7fcdff538d0954aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:00:08.081811   11992 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.6bf21e4f -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt
	I0513 23:00:08.091615   11992 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.6bf21e4f -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key
	I0513 23:00:08.093334   11992 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key
	I0513 23:00:08.093334   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0513 23:00:08.094342   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0513 23:00:08.094503   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0513 23:00:08.094569   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0513 23:00:08.094702   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0513 23:00:08.094765   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0513 23:00:08.095345   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0513 23:00:08.095418   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0513 23:00:08.095799   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem (1338 bytes)
	W0513 23:00:08.095995   11992 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984_empty.pem, impossibly tiny 0 bytes
	I0513 23:00:08.096091   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0513 23:00:08.096303   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0513 23:00:08.096498   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0513 23:00:08.096647   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0513 23:00:08.096967   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem (1708 bytes)
	I0513 23:00:08.097143   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:00:08.097268   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem -> /usr/share/ca-certificates/5984.pem
	I0513 23:00:08.097325   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /usr/share/ca-certificates/59842.pem
	I0513 23:00:08.097508   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 23:00:10.108235   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:00:10.108235   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:00:10.108346   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 23:00:12.468157   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 23:00:12.468157   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:00:12.468157   11992 sshutil.go:53] new ssh client: &{IP:172.23.102.229 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\id_rsa Username:docker}
	I0513 23:00:12.570375   11992 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0513 23:00:12.579310   11992 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0513 23:00:12.615150   11992 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0513 23:00:12.626659   11992 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0513 23:00:12.657992   11992 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0513 23:00:12.663601   11992 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0513 23:00:12.691635   11992 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0513 23:00:12.698400   11992 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0513 23:00:12.732891   11992 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0513 23:00:12.742921   11992 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0513 23:00:12.772962   11992 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0513 23:00:12.784195   11992 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0513 23:00:12.806361   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0513 23:00:12.852255   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0513 23:00:12.895318   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0513 23:00:12.937376   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0513 23:00:12.980226   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0513 23:00:13.022601   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0513 23:00:13.065569   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0513 23:00:13.111967   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0513 23:00:13.156293   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0513 23:00:13.196200   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem --> /usr/share/ca-certificates/5984.pem (1338 bytes)
	I0513 23:00:13.240832   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /usr/share/ca-certificates/59842.pem (1708 bytes)
	I0513 23:00:13.288519   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0513 23:00:13.317884   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0513 23:00:13.346970   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0513 23:00:13.375433   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0513 23:00:13.406787   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0513 23:00:13.436213   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0513 23:00:13.466442   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0513 23:00:13.508092   11992 ssh_runner.go:195] Run: openssl version
	I0513 23:00:13.525525   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0513 23:00:13.553663   11992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:00:13.560942   11992 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:00:13.569676   11992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:00:13.586050   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0513 23:00:13.613066   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5984.pem && ln -fs /usr/share/ca-certificates/5984.pem /etc/ssl/certs/5984.pem"
	I0513 23:00:13.642754   11992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5984.pem
	I0513 23:00:13.649805   11992 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0513 23:00:13.660330   11992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5984.pem
	I0513 23:00:13.680305   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5984.pem /etc/ssl/certs/51391683.0"
	I0513 23:00:13.712275   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59842.pem && ln -fs /usr/share/ca-certificates/59842.pem /etc/ssl/certs/59842.pem"
	I0513 23:00:13.741417   11992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59842.pem
	I0513 23:00:13.748831   11992 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0513 23:00:13.756000   11992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59842.pem
	I0513 23:00:13.773591   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59842.pem /etc/ssl/certs/3ec20f2e.0"
	I0513 23:00:13.803117   11992 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0513 23:00:13.809118   11992 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0513 23:00:13.809118   11992 kubeadm.go:928] updating node {m02 172.23.108.68 8443 v1.30.0 docker true true} ...
	I0513 23:00:13.810131   11992 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-586300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.108.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-586300 Namespace:default APIServerHAVIP:172.23.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0513 23:00:13.810131   11992 kube-vip.go:115] generating kube-vip config ...
	I0513 23:00:13.818253   11992 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0513 23:00:13.842275   11992 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0513 23:00:13.842275   11992 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.23.111.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0513 23:00:13.855571   11992 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0513 23:00:13.871147   11992 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0513 23:00:13.879558   11992 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0513 23:00:13.899813   11992 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl
	I0513 23:00:13.900458   11992 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet
	I0513 23:00:13.900458   11992 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm
	I0513 23:00:15.081118   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0513 23:00:15.088574   11992 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0513 23:00:15.098760   11992 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0513 23:00:15.099769   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0513 23:00:15.628406   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0513 23:00:15.637970   11992 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0513 23:00:15.646995   11992 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0513 23:00:15.646995   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0513 23:00:16.808169   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 23:00:16.832141   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0513 23:00:16.840678   11992 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0513 23:00:16.846804   11992 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0513 23:00:16.846804   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0513 23:00:17.393748   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0513 23:00:17.410092   11992 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0513 23:00:17.440755   11992 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0513 23:00:17.473050   11992 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0513 23:00:17.510522   11992 ssh_runner.go:195] Run: grep 172.23.111.254	control-plane.minikube.internal$ /etc/hosts
	I0513 23:00:17.517424   11992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.111.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0513 23:00:17.550027   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:00:17.744273   11992 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 23:00:17.774186   11992 host.go:66] Checking if "ha-586300" exists ...
	I0513 23:00:17.774909   11992 start.go:316] joinCluster: &{Name:ha-586300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-586300 Namespace:default APIServerHAVIP:172.
23.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.102.229 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.108.68 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\j
enkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 23:00:17.774909   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0513 23:00:17.774909   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 23:00:19.725520   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:00:19.725587   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:00:19.725587   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 23:00:22.013064   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 23:00:22.013805   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:00:22.014019   11992 sshutil.go:53] new ssh client: &{IP:172.23.102.229 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\id_rsa Username:docker}
	I0513 23:00:22.226738   11992 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.4516521s)
	I0513 23:00:22.226738   11992 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.23.108.68 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 23:00:22.226738   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n5djd1.506c2oeaejp22c1d --discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-586300-m02 --control-plane --apiserver-advertise-address=172.23.108.68 --apiserver-bind-port=8443"
	I0513 23:01:02.741253   11992 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n5djd1.506c2oeaejp22c1d --discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-586300-m02 --control-plane --apiserver-advertise-address=172.23.108.68 --apiserver-bind-port=8443": (40.5128976s)
	I0513 23:01:02.741332   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0513 23:01:03.463797   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-586300-m02 minikube.k8s.io/updated_at=2024_05_13T23_01_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761 minikube.k8s.io/name=ha-586300 minikube.k8s.io/primary=false
	I0513 23:01:03.684131   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-586300-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0513 23:01:03.847148   11992 start.go:318] duration metric: took 46.0704007s to joinCluster
	I0513 23:01:03.847388   11992 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.23.108.68 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 23:01:03.850772   11992 out.go:177] * Verifying Kubernetes components...
	I0513 23:01:03.848379   11992 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:01:03.862778   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:01:04.272017   11992 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 23:01:04.306002   11992 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 23:01:04.307037   11992 kapi.go:59] client config for ha-586300: &rest.Config{Host:"https://172.23.111.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-586300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-586300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0513 23:01:04.307037   11992 kubeadm.go:477] Overriding stale ClientConfig host https://172.23.111.254:8443 with https://172.23.102.229:8443
	I0513 23:01:04.307992   11992 node_ready.go:35] waiting up to 6m0s for node "ha-586300-m02" to be "Ready" ...
	I0513 23:01:04.307992   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:04.307992   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:04.307992   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:04.307992   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:04.322756   11992 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0513 23:01:04.824116   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:04.824193   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:04.824193   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:04.824193   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:04.836422   11992 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0513 23:01:05.318663   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:05.318696   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:05.318696   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:05.318696   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:05.328726   11992 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0513 23:01:05.808544   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:05.808777   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:05.808777   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:05.808777   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:05.814209   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:01:06.313137   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:06.313200   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:06.313200   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:06.313200   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:06.317911   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:06.318588   11992 node_ready.go:53] node "ha-586300-m02" has status "Ready":"False"
	I0513 23:01:06.821279   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:06.821279   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:06.821279   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:06.821362   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:06.826011   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:07.314913   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:07.314913   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:07.314913   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:07.314913   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:07.319528   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:07.823280   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:07.823280   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:07.823280   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:07.823280   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:07.827866   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:08.319047   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:08.319047   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:08.319047   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:08.319047   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:08.631292   11992 round_trippers.go:574] Response Status: 200 OK in 312 milliseconds
	I0513 23:01:08.631921   11992 node_ready.go:53] node "ha-586300-m02" has status "Ready":"False"
	I0513 23:01:08.822614   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:08.822614   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:08.822614   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:08.822614   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:08.827064   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:09.312276   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:09.312374   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:09.312374   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:09.312374   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:09.317035   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:09.819485   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:09.819485   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:09.819485   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:09.819485   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:09.824333   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:10.317579   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:10.317579   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:10.317579   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:10.317579   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:10.322414   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:10.818299   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:10.818299   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:10.818299   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:10.818299   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:10.825888   11992 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:01:10.827436   11992 node_ready.go:49] node "ha-586300-m02" has status "Ready":"True"
	I0513 23:01:10.827551   11992 node_ready.go:38] duration metric: took 6.5192998s for node "ha-586300-m02" to be "Ready" ...
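Each GET of `/api/v1/nodes/ha-586300-m02` above is inspected for a `Ready` condition with status `True`; the "Ready":"False" / "Ready":"True" messages in the log come from that check. A minimal, self-contained sketch of the decode-and-check step (modeling only the fields involved; the struct names here are illustrative, not minikube's actual types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nodeStatus models just the fields of a /api/v1/nodes/<name> response that
// the Ready wait inspects. Field names follow the Kubernetes Node schema;
// everything else is omitted for brevity.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// isReady reports whether the node JSON carries a Ready condition with
// status "True".
func isReady(body []byte) (bool, error) {
	var n nodeStatus
	if err := json.Unmarshal(body, &n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	sample := []byte(`{"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
	ok, err := isReady(sample)
	fmt.Println(ok, err) // prints: true <nil>
}
```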
	I0513 23:01:10.827606   11992 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0513 23:01:10.827724   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods
	I0513 23:01:10.827724   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:10.827724   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:10.827724   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:10.840007   11992 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0513 23:01:10.849223   11992 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4qbhd" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:10.849223   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4qbhd
	I0513 23:01:10.849223   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:10.849223   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:10.849223   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:10.853297   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:10.854364   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:01:10.854364   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:10.854364   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:10.854364   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:10.858290   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:10.859217   11992 pod_ready.go:92] pod "coredns-7db6d8ff4d-4qbhd" in "kube-system" namespace has status "Ready":"True"
	I0513 23:01:10.859217   11992 pod_ready.go:81] duration metric: took 9.9937ms for pod "coredns-7db6d8ff4d-4qbhd" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:10.859217   11992 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wj8z7" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:10.859331   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wj8z7
	I0513 23:01:10.859370   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:10.859370   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:10.859370   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:10.862568   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:10.864055   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:01:10.864109   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:10.864109   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:10.864109   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:10.868063   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:10.869561   11992 pod_ready.go:92] pod "coredns-7db6d8ff4d-wj8z7" in "kube-system" namespace has status "Ready":"True"
	I0513 23:01:10.869647   11992 pod_ready.go:81] duration metric: took 10.4295ms for pod "coredns-7db6d8ff4d-wj8z7" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:10.869647   11992 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:10.869914   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300
	I0513 23:01:10.869914   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:10.869914   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:10.869914   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:10.873231   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:10.874701   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:01:10.874787   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:10.874787   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:10.874787   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:10.878367   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:10.879610   11992 pod_ready.go:92] pod "etcd-ha-586300" in "kube-system" namespace has status "Ready":"True"
	I0513 23:01:10.879610   11992 pod_ready.go:81] duration metric: took 9.9627ms for pod "etcd-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:10.879683   11992 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:10.879754   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:10.879793   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:10.879793   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:10.879793   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:10.883561   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:10.884841   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:10.884841   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:10.884841   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:10.884892   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:10.888707   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:11.392915   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:11.393007   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:11.393007   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:11.393007   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:11.400147   11992 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:01:11.401023   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:11.401023   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:11.401023   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:11.401023   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:11.405957   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:11.892790   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:11.892863   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:11.892863   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:11.892863   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:11.897522   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:11.899057   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:11.899057   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:11.899057   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:11.899057   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:11.904191   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:12.390988   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:12.390988   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:12.390988   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:12.390988   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:12.398885   11992 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:01:12.399951   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:12.399951   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:12.399951   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:12.399951   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:12.404548   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:12.890519   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:12.890603   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:12.890603   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:12.890603   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:12.896500   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:01:12.897684   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:12.897684   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:12.897684   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:12.897684   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:12.902251   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:12.904041   11992 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:01:13.390520   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:13.390520   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:13.390766   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:13.390766   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:13.395035   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:13.396915   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:13.397006   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:13.397006   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:13.397006   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:13.400181   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:13.890540   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:13.890640   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:13.890640   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:13.890719   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:13.895684   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:13.896782   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:13.896857   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:13.896857   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:13.896857   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:13.900997   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:14.392366   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:14.392453   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:14.392453   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:14.392453   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:14.396682   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:14.398198   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:14.398198   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:14.398198   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:14.398198   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:14.402168   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:14.890162   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:14.890547   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:14.890547   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:14.890547   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:14.895316   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:14.896237   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:14.896341   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:14.896341   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:14.896341   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:14.899638   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:15.391924   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:15.391924   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:15.391924   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:15.391924   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:15.396698   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:15.398453   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:15.398516   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:15.398516   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:15.398516   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:15.404029   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:01:15.405311   11992 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:01:15.888629   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:15.888719   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:15.888804   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:15.888804   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:15.893471   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:15.894469   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:15.894469   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:15.894469   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:15.894469   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:15.898582   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:16.391866   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:16.391946   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:16.391946   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:16.392023   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:16.397803   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:01:16.398996   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:16.398996   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:16.399071   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:16.399071   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:16.403153   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:16.888988   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:16.888988   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:16.888988   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:16.888988   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:16.893258   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:16.894492   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:16.894553   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:16.894553   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:16.894553   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:16.900351   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:01:17.392447   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:17.392447   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:17.392447   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:17.392447   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:17.396482   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:17.397906   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:17.397906   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:17.397906   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:17.397906   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:17.402655   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:17.893760   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:17.893857   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:17.893857   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:17.893857   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:17.902058   11992 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 23:01:17.903267   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:17.903329   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:17.903329   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:17.903329   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:17.906596   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:17.908177   11992 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:01:18.381210   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:18.381210   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:18.381294   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:18.381294   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:18.389357   11992 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 23:01:18.390375   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:18.390375   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:18.390409   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:18.390409   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:18.394536   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:18.886643   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:18.886717   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:18.886717   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:18.886717   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:18.895343   11992 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 23:01:18.896356   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:18.896356   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:18.896356   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:18.896356   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:18.902590   11992 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:01:18.903417   11992 pod_ready.go:92] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"True"
	I0513 23:01:18.903417   11992 pod_ready.go:81] duration metric: took 8.0234142s for pod "etcd-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:18.903417   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:18.903417   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300
	I0513 23:01:18.903417   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:18.903417   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:18.903417   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:18.908034   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:18.909571   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:01:18.909599   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:18.909599   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:18.909599   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:18.913508   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:18.914873   11992 pod_ready.go:92] pod "kube-apiserver-ha-586300" in "kube-system" namespace has status "Ready":"True"
	I0513 23:01:18.914873   11992 pod_ready.go:81] duration metric: took 11.4558ms for pod "kube-apiserver-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:18.914873   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:18.914956   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300-m02
	I0513 23:01:18.915032   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:18.915032   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:18.915032   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:18.919147   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:18.919147   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:18.920489   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:18.920489   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:18.920489   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:18.923248   11992 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:01:18.923729   11992 pod_ready.go:92] pod "kube-apiserver-ha-586300-m02" in "kube-system" namespace has status "Ready":"True"
	I0513 23:01:18.923729   11992 pod_ready.go:81] duration metric: took 8.8555ms for pod "kube-apiserver-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:18.923729   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:18.923729   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-586300
	I0513 23:01:18.923729   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:18.923729   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:18.923729   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:18.927884   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:18.927977   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:01:18.927977   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:18.927977   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:18.927977   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:18.932598   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:18.933372   11992 pod_ready.go:92] pod "kube-controller-manager-ha-586300" in "kube-system" namespace has status "Ready":"True"
	I0513 23:01:18.933372   11992 pod_ready.go:81] duration metric: took 9.6423ms for pod "kube-controller-manager-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:18.933372   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:18.933487   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-586300-m02
	I0513 23:01:18.933487   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:18.933487   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:18.933487   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:18.938742   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:01:18.939575   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:18.939575   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:18.939575   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:18.939575   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:18.943421   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:18.944625   11992 pod_ready.go:92] pod "kube-controller-manager-ha-586300-m02" in "kube-system" namespace has status "Ready":"True"
	I0513 23:01:18.944674   11992 pod_ready.go:81] duration metric: took 11.2482ms for pod "kube-controller-manager-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:18.944729   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6mpjv" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:19.089651   11992 request.go:629] Waited for 144.5515ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6mpjv
	I0513 23:01:19.089737   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6mpjv
	I0513 23:01:19.089737   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:19.089737   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:19.089737   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:19.095805   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:01:19.292653   11992 request.go:629] Waited for 195.1762ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:19.292861   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:19.292861   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:19.292861   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:19.292861   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:19.298216   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:19.299332   11992 pod_ready.go:92] pod "kube-proxy-6mpjv" in "kube-system" namespace has status "Ready":"True"
	I0513 23:01:19.299332   11992 pod_ready.go:81] duration metric: took 354.5372ms for pod "kube-proxy-6mpjv" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:19.299332   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-77zxb" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:19.497154   11992 request.go:629] Waited for 197.815ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-77zxb
	I0513 23:01:19.497501   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-77zxb
	I0513 23:01:19.497501   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:19.497501   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:19.497501   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:19.503173   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:01:19.687111   11992 request.go:629] Waited for 182.9406ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:01:19.687352   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:01:19.687461   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:19.687461   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:19.687461   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:19.691930   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:19.692295   11992 pod_ready.go:92] pod "kube-proxy-77zxb" in "kube-system" namespace has status "Ready":"True"
	I0513 23:01:19.692295   11992 pod_ready.go:81] duration metric: took 392.9482ms for pod "kube-proxy-77zxb" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:19.692295   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:19.888519   11992 request.go:629] Waited for 196.2158ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-586300
	I0513 23:01:19.888914   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-586300
	I0513 23:01:19.888914   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:19.888914   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:19.888914   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:19.895307   11992 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:01:20.091167   11992 request.go:629] Waited for 194.2281ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:01:20.091283   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:01:20.091283   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:20.091283   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:20.091580   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:20.097060   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:01:20.097060   11992 pod_ready.go:92] pod "kube-scheduler-ha-586300" in "kube-system" namespace has status "Ready":"True"
	I0513 23:01:20.097592   11992 pod_ready.go:81] duration metric: took 405.2804ms for pod "kube-scheduler-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:20.097592   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:20.294339   11992 request.go:629] Waited for 196.7396ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-586300-m02
	I0513 23:01:20.294339   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-586300-m02
	I0513 23:01:20.294339   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:20.294339   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:20.294339   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:20.298758   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:20.499306   11992 request.go:629] Waited for 199.177ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:20.499628   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:20.499628   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:20.499718   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:20.499718   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:20.504910   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:01:20.505596   11992 pod_ready.go:92] pod "kube-scheduler-ha-586300-m02" in "kube-system" namespace has status "Ready":"True"
	I0513 23:01:20.505596   11992 pod_ready.go:81] duration metric: took 407.9874ms for pod "kube-scheduler-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:20.505596   11992 pod_ready.go:38] duration metric: took 9.6776033s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0513 23:01:20.505686   11992 api_server.go:52] waiting for apiserver process to appear ...
	I0513 23:01:20.513650   11992 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 23:01:20.537895   11992 api_server.go:72] duration metric: took 16.689765s to wait for apiserver process to appear ...
	I0513 23:01:20.537895   11992 api_server.go:88] waiting for apiserver healthz status ...
	I0513 23:01:20.537895   11992 api_server.go:253] Checking apiserver healthz at https://172.23.102.229:8443/healthz ...
	I0513 23:01:20.545795   11992 api_server.go:279] https://172.23.102.229:8443/healthz returned 200:
	ok
	I0513 23:01:20.545890   11992 round_trippers.go:463] GET https://172.23.102.229:8443/version
	I0513 23:01:20.545890   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:20.546001   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:20.546001   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:20.550028   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:20.550028   11992 api_server.go:141] control plane version: v1.30.0
	I0513 23:01:20.550028   11992 api_server.go:131] duration metric: took 12.1328ms to wait for apiserver health ...
	I0513 23:01:20.550028   11992 system_pods.go:43] waiting for kube-system pods to appear ...
	I0513 23:01:20.700929   11992 request.go:629] Waited for 150.6961ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods
	I0513 23:01:20.701011   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods
	I0513 23:01:20.701206   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:20.701206   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:20.701206   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:20.727755   11992 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0513 23:01:20.734753   11992 system_pods.go:59] 17 kube-system pods found
	I0513 23:01:20.734753   11992 system_pods.go:61] "coredns-7db6d8ff4d-4qbhd" [6fa6abce-1f7c-4119-b74c-e4e2275f77f4] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "coredns-7db6d8ff4d-wj8z7" [21d8cc35-f37a-42b6-9e44-dfce810d1d51] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "etcd-ha-586300" [a1809532-311c-4f80-9236-fec7256f7b3c] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "etcd-ha-586300-m02" [37b3bba9-35b3-4723-b954-94c4f45c9b96] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "kindnet-8hh55" [4fb9a98f-06d4-4333-89dc-b90c8b880f92] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "kindnet-vddtk" [bf6e57db-8270-4024-ba93-abce11d81513] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "kube-apiserver-ha-586300" [d6659d47-ce69-4334-a35c-7b66898b49de] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "kube-apiserver-ha-586300-m02" [0b8839d5-3133-4d52-9264-9d998bc54617] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "kube-controller-manager-ha-586300" [3416887d-320b-4417-b6ba-ffabb7b84885] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "kube-controller-manager-ha-586300-m02" [eccf51fc-16b7-4d89-95ab-59ec4e8fbc8c] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "kube-proxy-6mpjv" [0cd7eb37-2ff4-487e-b5e6-9d71c69a4814] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "kube-proxy-77zxb" [bc2480b2-3de0-49c4-b84e-8ae7e85829a1] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "kube-scheduler-ha-586300" [8bb322de-7dd8-4780-ae04-9d18a293aa0b] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "kube-scheduler-ha-586300-m02" [c3bb6486-257a-4993-9127-34dada81473a] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "kube-vip-ha-586300" [5dfa662f-0df1-485a-a52b-fdcd87e23145] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "kube-vip-ha-586300-m02" [4372ac88-49f7-4dcd-9c13-1b8484817d28] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "storage-provisioner" [fc11360c-19a1-4d0b-966e-49946c8b0d47] Running
	I0513 23:01:20.734753   11992 system_pods.go:74] duration metric: took 184.7177ms to wait for pod list to return data ...
	I0513 23:01:20.734753   11992 default_sa.go:34] waiting for default service account to be created ...
	I0513 23:01:20.890784   11992 request.go:629] Waited for 155.7775ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/default/serviceaccounts
	I0513 23:01:20.891114   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/default/serviceaccounts
	I0513 23:01:20.891114   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:20.891114   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:20.891114   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:20.898491   11992 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:01:20.898491   11992 default_sa.go:45] found service account: "default"
	I0513 23:01:20.898491   11992 default_sa.go:55] duration metric: took 163.732ms for default service account to be created ...
	I0513 23:01:20.898491   11992 system_pods.go:116] waiting for k8s-apps to be running ...
	I0513 23:01:21.094997   11992 request.go:629] Waited for 196.2701ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods
	I0513 23:01:21.094997   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods
	I0513 23:01:21.094997   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:21.094997   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:21.095123   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:21.103184   11992 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 23:01:21.109523   11992 system_pods.go:86] 17 kube-system pods found
	I0513 23:01:21.109592   11992 system_pods.go:89] "coredns-7db6d8ff4d-4qbhd" [6fa6abce-1f7c-4119-b74c-e4e2275f77f4] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "coredns-7db6d8ff4d-wj8z7" [21d8cc35-f37a-42b6-9e44-dfce810d1d51] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "etcd-ha-586300" [a1809532-311c-4f80-9236-fec7256f7b3c] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "etcd-ha-586300-m02" [37b3bba9-35b3-4723-b954-94c4f45c9b96] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "kindnet-8hh55" [4fb9a98f-06d4-4333-89dc-b90c8b880f92] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "kindnet-vddtk" [bf6e57db-8270-4024-ba93-abce11d81513] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "kube-apiserver-ha-586300" [d6659d47-ce69-4334-a35c-7b66898b49de] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "kube-apiserver-ha-586300-m02" [0b8839d5-3133-4d52-9264-9d998bc54617] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "kube-controller-manager-ha-586300" [3416887d-320b-4417-b6ba-ffabb7b84885] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "kube-controller-manager-ha-586300-m02" [eccf51fc-16b7-4d89-95ab-59ec4e8fbc8c] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "kube-proxy-6mpjv" [0cd7eb37-2ff4-487e-b5e6-9d71c69a4814] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "kube-proxy-77zxb" [bc2480b2-3de0-49c4-b84e-8ae7e85829a1] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "kube-scheduler-ha-586300" [8bb322de-7dd8-4780-ae04-9d18a293aa0b] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "kube-scheduler-ha-586300-m02" [c3bb6486-257a-4993-9127-34dada81473a] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "kube-vip-ha-586300" [5dfa662f-0df1-485a-a52b-fdcd87e23145] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "kube-vip-ha-586300-m02" [4372ac88-49f7-4dcd-9c13-1b8484817d28] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "storage-provisioner" [fc11360c-19a1-4d0b-966e-49946c8b0d47] Running
	I0513 23:01:21.109592   11992 system_pods.go:126] duration metric: took 211.0922ms to wait for k8s-apps to be running ...
	I0513 23:01:21.109592   11992 system_svc.go:44] waiting for kubelet service to be running ....
	I0513 23:01:21.117516   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 23:01:21.142542   11992 system_svc.go:56] duration metric: took 32.9482ms WaitForService to wait for kubelet
	I0513 23:01:21.142641   11992 kubeadm.go:576] duration metric: took 17.2944876s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 23:01:21.142641   11992 node_conditions.go:102] verifying NodePressure condition ...
	I0513 23:01:21.298495   11992 request.go:629] Waited for 155.5894ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes
	I0513 23:01:21.298495   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes
	I0513 23:01:21.298495   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:21.298495   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:21.298608   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:21.306173   11992 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:01:21.307269   11992 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0513 23:01:21.307269   11992 node_conditions.go:123] node cpu capacity is 2
	I0513 23:01:21.307269   11992 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0513 23:01:21.307269   11992 node_conditions.go:123] node cpu capacity is 2
	I0513 23:01:21.307269   11992 node_conditions.go:105] duration metric: took 164.6215ms to run NodePressure ...
	I0513 23:01:21.307269   11992 start.go:240] waiting for startup goroutines ...
	I0513 23:01:21.307269   11992 start.go:254] writing updated cluster config ...
	I0513 23:01:21.311014   11992 out.go:177] 
	I0513 23:01:21.326682   11992 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:01:21.326682   11992 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\config.json ...
	I0513 23:01:21.332189   11992 out.go:177] * Starting "ha-586300-m03" control-plane node in "ha-586300" cluster
	I0513 23:01:21.335224   11992 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 23:01:21.335224   11992 cache.go:56] Caching tarball of preloaded images
	I0513 23:01:21.335852   11992 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0513 23:01:21.335884   11992 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 23:01:21.335884   11992 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\config.json ...
	I0513 23:01:21.342120   11992 start.go:360] acquireMachinesLock for ha-586300-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 23:01:21.342120   11992 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-586300-m03"
	I0513 23:01:21.342120   11992 start.go:93] Provisioning new machine with config: &{Name:ha-586300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-586300 Namespace:def
ault APIServerHAVIP:172.23.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.102.229 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.108.68 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false
istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 23:01:21.342120   11992 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0513 23:01:21.345591   11992 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 23:01:21.345961   11992 start.go:159] libmachine.API.Create for "ha-586300" (driver="hyperv")
	I0513 23:01:21.345995   11992 client.go:168] LocalClient.Create starting
	I0513 23:01:21.346381   11992 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0513 23:01:21.346409   11992 main.go:141] libmachine: Decoding PEM data...
	I0513 23:01:21.346409   11992 main.go:141] libmachine: Parsing certificate...
	I0513 23:01:21.346409   11992 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0513 23:01:21.346409   11992 main.go:141] libmachine: Decoding PEM data...
	I0513 23:01:21.346409   11992 main.go:141] libmachine: Parsing certificate...
	I0513 23:01:21.346952   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0513 23:01:23.083890   11992 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0513 23:01:23.083890   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:23.084005   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0513 23:01:24.639184   11992 main.go:141] libmachine: [stdout =====>] : False
	
	I0513 23:01:24.639716   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:24.639716   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0513 23:01:25.964660   11992 main.go:141] libmachine: [stdout =====>] : True
	
	I0513 23:01:25.964660   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:25.965308   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0513 23:01:29.281155   11992 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0513 23:01:29.281155   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:29.282732   11992 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-amd64.iso...
	I0513 23:01:29.593834   11992 main.go:141] libmachine: Creating SSH key...
	I0513 23:01:29.731958   11992 main.go:141] libmachine: Creating VM...
	I0513 23:01:29.732952   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0513 23:01:32.334634   11992 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0513 23:01:32.334634   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:32.334722   11992 main.go:141] libmachine: Using switch "Default Switch"
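	[annotation] The two `Get-VMSwitch` calls above filter for either an External switch or the well-known Default Switch GUID, then sort by `SwitchType`. A minimal Go sketch of that selection logic over the JSON the pipeline emits (type and function names here are illustrative, not minikube's actual symbols):

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"sort"
)

// vmSwitch mirrors the fields selected by the PowerShell pipeline.
// Hyper-V switch types: 0 = Private, 1 = Internal (Default Switch), 2 = External.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

const defaultSwitchID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"

// Sample stdout from the log above, reduced to one line.
var sampleSwitchJSON = []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)

// pickSwitch prefers an External switch and falls back to the Default Switch.
func pickSwitch(raw []byte) (string, error) {
	var switches []vmSwitch
	if err := json.Unmarshal(raw, &switches); err != nil {
		return "", err
	}
	// Scan External (2) first, mirroring the Sort-Object ordering intent.
	sort.Slice(switches, func(i, j int) bool {
		return switches[i].SwitchType > switches[j].SwitchType
	})
	for _, s := range switches {
		if s.SwitchType == 2 || s.Id == defaultSwitchID {
			return s.Name, nil
		}
	}
	return "", errors.New("no usable Hyper-V switch found")
}

func main() {
	name, err := pickSwitch(sampleSwitchJSON)
	if err != nil {
		panic(err)
	}
	fmt.Println(name) // prints "Default Switch"
}
```

	With only the Internal Default Switch present, as in this run, the fallback branch is what selects `"Default Switch"`.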
	I0513 23:01:32.334808   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0513 23:01:33.921718   11992 main.go:141] libmachine: [stdout =====>] : True
	
	I0513 23:01:33.921806   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:33.921806   11992 main.go:141] libmachine: Creating VHD
	I0513 23:01:33.921806   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0513 23:01:37.448100   11992 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : DECC4003-BBC9-4CBF-844E-AF81776EB307
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0513 23:01:37.448100   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:37.448100   11992 main.go:141] libmachine: Writing magic tar header
	I0513 23:01:37.449095   11992 main.go:141] libmachine: Writing SSH key tar header
	I0513 23:01:37.459091   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0513 23:01:40.427049   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:01:40.427858   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:40.427858   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m03\disk.vhd' -SizeBytes 20000MB
	I0513 23:01:42.765170   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:01:42.765170   11992 main.go:141] libmachine: [stderr =====>] : 
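	[annotation] The sequence above is the driver's disk-bootstrap trick: create a tiny 10MB fixed VHD, write a "magic tar header" plus the SSH key into it as a raw tar stream the guest unpacks on first boot, then `Convert-VHD` to dynamic and `Resize-VHD` to full size. A hedged Go sketch of just the tar-stream step (file layout and names are assumptions for illustration, not minikube's exact format):

```go
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
)

// tarSSHKey builds a tar stream carrying the machine's SSH public key,
// the kind of payload written into the raw VHD before conversion.
func tarSSHKey(pubKey []byte) ([]byte, error) {
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	// A directory entry followed by the key file; the guest's init
	// extracts these into the docker user's home directory.
	dir := &tar.Header{Name: ".ssh/", Mode: 0700, Typeflag: tar.TypeDir}
	if err := tw.WriteHeader(dir); err != nil {
		return nil, err
	}
	key := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0600, Size: int64(len(pubKey))}
	if err := tw.WriteHeader(key); err != nil {
		return nil, err
	}
	if _, err := tw.Write(pubKey); err != nil {
		return nil, err
	}
	if err := tw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	data, err := tarSSHKey([]byte("ssh-rsa AAAA... jenkins"))
	if err != nil {
		panic(err)
	}
	// tar output is always a multiple of the 512-byte block size.
	fmt.Println(len(data)%512 == 0) // prints "true"
}
```

	Converting fixed-to-dynamic with `-DeleteSource` keeps only `disk.vhd`, which is why the later resize targets that path.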
	I0513 23:01:42.765170   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-586300-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0513 23:01:46.022543   11992 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-586300-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0513 23:01:46.023465   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:46.023568   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-586300-m03 -DynamicMemoryEnabled $false
	I0513 23:01:48.072299   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:01:48.072384   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:48.072465   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-586300-m03 -Count 2
	I0513 23:01:50.089656   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:01:50.090506   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:50.090684   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-586300-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m03\boot2docker.iso'
	I0513 23:01:52.414416   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:01:52.414924   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:52.415074   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-586300-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m03\disk.vhd'
	I0513 23:01:54.778291   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:01:54.778291   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:54.778291   11992 main.go:141] libmachine: Starting VM...
	I0513 23:01:54.778674   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-586300-m03
	I0513 23:01:57.638972   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:01:57.639785   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:57.639785   11992 main.go:141] libmachine: Waiting for host to start...
	I0513 23:01:57.639831   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:01:59.668007   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:01:59.668825   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:59.668825   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:01.925456   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:02:01.925456   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:02.925523   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:04.907572   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:04.907961   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:04.908056   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:07.181920   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:02:07.181920   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:08.191255   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:10.137087   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:10.137087   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:10.137164   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:12.396002   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:02:12.396002   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:13.396954   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:15.388418   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:15.388418   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:15.388418   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:17.673950   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:02:17.673950   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:18.687274   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:20.678981   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:20.679164   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:20.679164   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:23.058989   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:02:23.059022   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:23.059093   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:25.001432   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:25.001432   11992 main.go:141] libmachine: [stderr =====>] : 
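	[annotation] The "Waiting for host to start..." stretch above is a poll loop: query VM state, then the first adapter's first IP, sleep roughly a second, repeat until Hyper-V reports an address (here it took several rounds before `172.23.109.129` appeared). A minimal Go sketch of that retry loop, with `getIP` standing in for the PowerShell invocation:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP retries getIP until it yields a non-empty address or the
// attempt budget is exhausted.
func waitForIP(getIP func() (string, error), attempts int, delay time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		ip, err := getIP()
		if err == nil && ip != "" {
			return ip, nil
		}
		time.Sleep(delay)
	}
	return "", errors.New("timed out waiting for VM IP")
}

func main() {
	// Simulate the DHCP lease arriving after a few empty polls, as in the log.
	calls := 0
	getIP := func() (string, error) {
		calls++
		if calls < 3 {
			return "", nil
		}
		return "172.23.109.129", nil
	}
	ip, err := waitForIP(getIP, 10, time.Millisecond)
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // prints "172.23.109.129"
}
```

	Each empty stdout in the log corresponds to one failed poll; the one-second gaps between timestamps are the sleep.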
	I0513 23:02:25.001432   11992 machine.go:94] provisionDockerMachine start ...
	I0513 23:02:25.001618   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:26.941534   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:26.942247   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:26.942247   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:29.247392   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:02:29.247392   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:29.251796   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 23:02:29.252096   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.109.129 22 <nil> <nil>}
	I0513 23:02:29.252096   11992 main.go:141] libmachine: About to run SSH command:
	hostname
	I0513 23:02:29.383765   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0513 23:02:29.383848   11992 buildroot.go:166] provisioning hostname "ha-586300-m03"
	I0513 23:02:29.383848   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:31.319577   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:31.319883   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:31.319883   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:33.570688   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:02:33.570688   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:33.574995   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 23:02:33.575391   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.109.129 22 <nil> <nil>}
	I0513 23:02:33.575463   11992 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-586300-m03 && echo "ha-586300-m03" | sudo tee /etc/hostname
	I0513 23:02:33.746483   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-586300-m03
	
	I0513 23:02:33.746483   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:35.646184   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:35.646184   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:35.646263   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:37.961568   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:02:37.961889   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:37.965584   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 23:02:37.966102   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.109.129 22 <nil> <nil>}
	I0513 23:02:37.966102   11992 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-586300-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-586300-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-586300-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0513 23:02:38.111516   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0513 23:02:38.111597   11992 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0513 23:02:38.111661   11992 buildroot.go:174] setting up certificates
	I0513 23:02:38.111661   11992 provision.go:84] configureAuth start
	I0513 23:02:38.111733   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:40.003471   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:40.004441   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:40.004536   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:42.266168   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:02:42.266168   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:42.266168   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:44.175565   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:44.176044   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:44.176044   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:46.473899   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:02:46.474407   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:46.474407   11992 provision.go:143] copyHostCerts
	I0513 23:02:46.474545   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0513 23:02:46.474830   11992 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0513 23:02:46.474830   11992 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0513 23:02:46.475239   11992 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0513 23:02:46.476165   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0513 23:02:46.476576   11992 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0513 23:02:46.476576   11992 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0513 23:02:46.476774   11992 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0513 23:02:46.477836   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0513 23:02:46.478167   11992 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0513 23:02:46.478239   11992 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0513 23:02:46.478713   11992 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0513 23:02:46.479451   11992 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-586300-m03 san=[127.0.0.1 172.23.109.129 ha-586300-m03 localhost minikube]
	I0513 23:02:46.604874   11992 provision.go:177] copyRemoteCerts
	I0513 23:02:46.612818   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0513 23:02:46.612818   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:48.545088   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:48.545088   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:48.545356   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:50.879996   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:02:50.880488   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:50.880792   11992 sshutil.go:53] new ssh client: &{IP:172.23.109.129 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m03\id_rsa Username:docker}
	I0513 23:02:50.992674   11992 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3796833s)
	I0513 23:02:50.992674   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0513 23:02:50.993208   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0513 23:02:51.036147   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0513 23:02:51.036147   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0513 23:02:51.083622   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0513 23:02:51.083892   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0513 23:02:51.130883   11992 provision.go:87] duration metric: took 13.0186466s to configureAuth
	I0513 23:02:51.130883   11992 buildroot.go:189] setting minikube options for container-runtime
	I0513 23:02:51.131086   11992 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:02:51.131630   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:53.038183   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:53.038372   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:53.038451   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:55.339732   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:02:55.339781   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:55.342960   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 23:02:55.343560   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.109.129 22 <nil> <nil>}
	I0513 23:02:55.343560   11992 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0513 23:02:55.475089   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0513 23:02:55.475089   11992 buildroot.go:70] root file system type: tmpfs
	I0513 23:02:55.476069   11992 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0513 23:02:55.476069   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:57.380093   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:57.380309   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:57.380309   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:59.720715   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:02:59.721090   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:59.724994   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 23:02:59.725229   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.109.129 22 <nil> <nil>}
	I0513 23:02:59.725229   11992 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.23.102.229"
	Environment="NO_PROXY=172.23.102.229,172.23.108.68"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0513 23:02:59.885592   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.23.102.229
	Environment=NO_PROXY=172.23.102.229,172.23.108.68
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0513 23:02:59.885592   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:03:01.821417   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:03:01.821417   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:01.821498   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:03:04.160541   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:03:04.160541   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:04.165151   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 23:03:04.165489   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.109.129 22 <nil> <nil>}
	I0513 23:03:04.165564   11992 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0513 23:03:06.285964   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0513 23:03:06.286053   11992 machine.go:97] duration metric: took 41.2829883s to provisionDockerMachine
	I0513 23:03:06.286053   11992 client.go:171] duration metric: took 1m44.9358552s to LocalClient.Create
	I0513 23:03:06.286118   11992 start.go:167] duration metric: took 1m44.9359933s to libmachine.API.Create "ha-586300"
	I0513 23:03:06.286118   11992 start.go:293] postStartSetup for "ha-586300-m03" (driver="hyperv")
	I0513 23:03:06.286256   11992 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0513 23:03:06.294858   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0513 23:03:06.294858   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:03:08.278888   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:03:08.279036   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:08.279036   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:03:10.647475   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:03:10.648523   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:10.648577   11992 sshutil.go:53] new ssh client: &{IP:172.23.109.129 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m03\id_rsa Username:docker}
	I0513 23:03:10.769205   11992 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4741709s)
	I0513 23:03:10.778279   11992 ssh_runner.go:195] Run: cat /etc/os-release
	I0513 23:03:10.785301   11992 info.go:137] Remote host: Buildroot 2023.02.9
	I0513 23:03:10.785392   11992 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0513 23:03:10.785694   11992 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0513 23:03:10.786380   11992 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> 59842.pem in /etc/ssl/certs
	I0513 23:03:10.786380   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /etc/ssl/certs/59842.pem
	I0513 23:03:10.795035   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0513 23:03:10.812928   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /etc/ssl/certs/59842.pem (1708 bytes)
	I0513 23:03:10.858558   11992 start.go:296] duration metric: took 4.5721209s for postStartSetup
	I0513 23:03:10.860972   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:03:12.839515   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:03:12.839953   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:12.839953   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:03:15.137463   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:03:15.137463   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:15.138090   11992 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\config.json ...
	I0513 23:03:15.140750   11992 start.go:128] duration metric: took 1m53.7941169s to createHost
	I0513 23:03:15.140852   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:03:17.052586   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:03:17.052586   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:17.053216   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:03:19.367766   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:03:19.367766   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:19.371997   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 23:03:19.372433   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.109.129 22 <nil> <nil>}
	I0513 23:03:19.372433   11992 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0513 23:03:19.509989   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715641399.682594913
	
	I0513 23:03:19.509989   11992 fix.go:216] guest clock: 1715641399.682594913
	I0513 23:03:19.509989   11992 fix.go:229] Guest: 2024-05-13 23:03:19.682594913 +0000 UTC Remote: 2024-05-13 23:03:15.1407505 +0000 UTC m=+515.208189301 (delta=4.541844413s)
	I0513 23:03:19.510528   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:03:21.409957   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:03:21.409957   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:21.410041   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:03:23.703614   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:03:23.703614   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:23.707590   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 23:03:23.707799   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.109.129 22 <nil> <nil>}
	I0513 23:03:23.707799   11992 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715641399
	I0513 23:03:23.856021   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 13 23:03:19 UTC 2024
	
	I0513 23:03:23.856132   11992 fix.go:236] clock set: Mon May 13 23:03:19 UTC 2024
	 (err=<nil>)
	I0513 23:03:23.856132   11992 start.go:83] releasing machines lock for "ha-586300-m03", held for 2m2.5091546s
	I0513 23:03:23.856275   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:03:25.793336   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:03:25.793336   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:25.793336   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:03:28.121905   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:03:28.121905   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:28.125397   11992 out.go:177] * Found network options:
	I0513 23:03:28.127890   11992 out.go:177]   - NO_PROXY=172.23.102.229,172.23.108.68
	W0513 23:03:28.129707   11992 proxy.go:119] fail to check proxy env: Error ip not in block
	W0513 23:03:28.129707   11992 proxy.go:119] fail to check proxy env: Error ip not in block
	I0513 23:03:28.131849   11992 out.go:177]   - NO_PROXY=172.23.102.229,172.23.108.68
	W0513 23:03:28.135272   11992 proxy.go:119] fail to check proxy env: Error ip not in block
	W0513 23:03:28.135384   11992 proxy.go:119] fail to check proxy env: Error ip not in block
	W0513 23:03:28.137846   11992 proxy.go:119] fail to check proxy env: Error ip not in block
	W0513 23:03:28.137846   11992 proxy.go:119] fail to check proxy env: Error ip not in block
	I0513 23:03:28.139697   11992 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0513 23:03:28.139697   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:03:28.146844   11992 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0513 23:03:28.146844   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:03:30.124156   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:03:30.124156   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:30.124349   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:03:30.147482   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:03:30.147482   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:30.147482   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:03:32.510188   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:03:32.510188   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:32.510469   11992 sshutil.go:53] new ssh client: &{IP:172.23.109.129 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m03\id_rsa Username:docker}
	I0513 23:03:32.537168   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:03:32.537255   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:32.537771   11992 sshutil.go:53] new ssh client: &{IP:172.23.109.129 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m03\id_rsa Username:docker}
	I0513 23:03:32.691094   11992 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5439906s)
	I0513 23:03:32.691162   11992 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5512172s)
	W0513 23:03:32.691162   11992 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0513 23:03:32.699962   11992 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0513 23:03:32.728308   11992 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0513 23:03:32.728308   11992 start.go:494] detecting cgroup driver to use...
	I0513 23:03:32.728308   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 23:03:32.772226   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0513 23:03:32.804831   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0513 23:03:32.826837   11992 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0513 23:03:32.837151   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0513 23:03:32.864040   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 23:03:32.896905   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0513 23:03:32.924026   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 23:03:32.961588   11992 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0513 23:03:32.996868   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0513 23:03:33.026582   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0513 23:03:33.052584   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0513 23:03:33.081314   11992 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0513 23:03:33.109916   11992 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0513 23:03:33.136818   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:03:33.312615   11992 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0513 23:03:33.343531   11992 start.go:494] detecting cgroup driver to use...
	I0513 23:03:33.352386   11992 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0513 23:03:33.383406   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 23:03:33.413864   11992 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0513 23:03:33.450055   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 23:03:33.480675   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 23:03:33.512385   11992 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0513 23:03:33.567171   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 23:03:33.590983   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 23:03:33.635608   11992 ssh_runner.go:195] Run: which cri-dockerd
	I0513 23:03:33.650594   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0513 23:03:33.671225   11992 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0513 23:03:33.711697   11992 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0513 23:03:33.891985   11992 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0513 23:03:34.056859   11992 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0513 23:03:34.056859   11992 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0513 23:03:34.095674   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:03:34.277063   11992 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0513 23:03:36.788096   11992 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5109342s)
	I0513 23:03:36.797029   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0513 23:03:36.834629   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 23:03:36.864936   11992 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0513 23:03:37.058361   11992 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0513 23:03:37.257096   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:03:37.447902   11992 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0513 23:03:37.485604   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 23:03:37.517731   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:03:37.704688   11992 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0513 23:03:37.810519   11992 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0513 23:03:37.822568   11992 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0513 23:03:37.830126   11992 start.go:562] Will wait 60s for crictl version
	I0513 23:03:37.838770   11992 ssh_runner.go:195] Run: which crictl
	I0513 23:03:37.861035   11992 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0513 23:03:37.915612   11992 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0513 23:03:37.923611   11992 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 23:03:37.966270   11992 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 23:03:37.999973   11992 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0513 23:03:38.004306   11992 out.go:177]   - env NO_PROXY=172.23.102.229
	I0513 23:03:38.007563   11992 out.go:177]   - env NO_PROXY=172.23.102.229,172.23.108.68
	I0513 23:03:38.010575   11992 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0513 23:03:38.014330   11992 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0513 23:03:38.015329   11992 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0513 23:03:38.015329   11992 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0513 23:03:38.015329   11992 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:95:ed Flags:up|broadcast|multicast|running}
	I0513 23:03:38.017063   11992 ip.go:210] interface addr: fe80::3ceb:68d:afab:af25/64
	I0513 23:03:38.017063   11992 ip.go:210] interface addr: 172.23.96.1/20
	I0513 23:03:38.028728   11992 ssh_runner.go:195] Run: grep 172.23.96.1	host.minikube.internal$ /etc/hosts
	I0513 23:03:38.035142   11992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0513 23:03:38.061264   11992 mustload.go:65] Loading cluster: ha-586300
	I0513 23:03:38.061786   11992 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:03:38.062003   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 23:03:40.006439   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:03:40.007238   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:40.007238   11992 host.go:66] Checking if "ha-586300" exists ...
	I0513 23:03:40.007382   11992 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300 for IP: 172.23.109.129
	I0513 23:03:40.007382   11992 certs.go:194] generating shared ca certs ...
	I0513 23:03:40.007382   11992 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:03:40.008337   11992 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0513 23:03:40.008603   11992 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0513 23:03:40.008697   11992 certs.go:256] generating profile certs ...
	I0513 23:03:40.009260   11992 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\client.key
	I0513 23:03:40.009333   11992 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.53a5741f
	I0513 23:03:40.009430   11992 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.53a5741f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.102.229 172.23.108.68 172.23.109.129 172.23.111.254]
	I0513 23:03:40.148115   11992 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.53a5741f ...
	I0513 23:03:40.148115   11992 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.53a5741f: {Name:mk28c00991499451c4a682477df67fc5ce29b66c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:03:40.150112   11992 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.53a5741f ...
	I0513 23:03:40.150112   11992 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.53a5741f: {Name:mk10a0e3613314d7e3609376ac35f790fbf46370 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:03:40.150468   11992 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.53a5741f -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt
	I0513 23:03:40.164561   11992 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.53a5741f -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key
	I0513 23:03:40.165557   11992 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key
	I0513 23:03:40.165557   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0513 23:03:40.165557   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0513 23:03:40.165557   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0513 23:03:40.165557   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0513 23:03:40.166564   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0513 23:03:40.166564   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0513 23:03:40.166564   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0513 23:03:40.166564   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0513 23:03:40.167920   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem (1338 bytes)
	W0513 23:03:40.168272   11992 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984_empty.pem, impossibly tiny 0 bytes
	I0513 23:03:40.168371   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0513 23:03:40.168527   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0513 23:03:40.168527   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0513 23:03:40.169069   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0513 23:03:40.169585   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem (1708 bytes)
	I0513 23:03:40.169774   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:03:40.169964   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem -> /usr/share/ca-certificates/5984.pem
	I0513 23:03:40.170135   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /usr/share/ca-certificates/59842.pem
	I0513 23:03:40.170135   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 23:03:42.139887   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:03:42.139887   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:42.140027   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 23:03:44.536089   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 23:03:44.536089   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:44.536089   11992 sshutil.go:53] new ssh client: &{IP:172.23.102.229 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\id_rsa Username:docker}
	I0513 23:03:44.644441   11992 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0513 23:03:44.652312   11992 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0513 23:03:44.680447   11992 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0513 23:03:44.687343   11992 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0513 23:03:44.714890   11992 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0513 23:03:44.722701   11992 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0513 23:03:44.749215   11992 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0513 23:03:44.755490   11992 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0513 23:03:44.783327   11992 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0513 23:03:44.789739   11992 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0513 23:03:44.817169   11992 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0513 23:03:44.823471   11992 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0513 23:03:44.843825   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0513 23:03:44.891578   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0513 23:03:44.937727   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0513 23:03:44.983143   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0513 23:03:45.028970   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0513 23:03:45.076500   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0513 23:03:45.124489   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0513 23:03:45.174081   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0513 23:03:45.219276   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0513 23:03:45.266676   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem --> /usr/share/ca-certificates/5984.pem (1338 bytes)
	I0513 23:03:45.316744   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /usr/share/ca-certificates/59842.pem (1708 bytes)
	I0513 23:03:45.361143   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0513 23:03:45.390832   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0513 23:03:45.423697   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0513 23:03:45.454275   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0513 23:03:45.488020   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0513 23:03:45.518417   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0513 23:03:45.551122   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0513 23:03:45.596609   11992 ssh_runner.go:195] Run: openssl version
	I0513 23:03:45.613353   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59842.pem && ln -fs /usr/share/ca-certificates/59842.pem /etc/ssl/certs/59842.pem"
	I0513 23:03:45.644260   11992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59842.pem
	I0513 23:03:45.650743   11992 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0513 23:03:45.661386   11992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59842.pem
	I0513 23:03:45.678597   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59842.pem /etc/ssl/certs/3ec20f2e.0"
	I0513 23:03:45.709014   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0513 23:03:45.735579   11992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:03:45.742754   11992 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:03:45.750554   11992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:03:45.769896   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0513 23:03:45.796869   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5984.pem && ln -fs /usr/share/ca-certificates/5984.pem /etc/ssl/certs/5984.pem"
	I0513 23:03:45.830663   11992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5984.pem
	I0513 23:03:45.837116   11992 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0513 23:03:45.845371   11992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5984.pem
	I0513 23:03:45.864544   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5984.pem /etc/ssl/certs/51391683.0"
	I0513 23:03:45.898702   11992 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0513 23:03:45.904992   11992 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0513 23:03:45.904992   11992 kubeadm.go:928] updating node {m03 172.23.109.129 8443 v1.30.0 docker true true} ...
	I0513 23:03:45.904992   11992 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-586300-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.109.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-586300 Namespace:default APIServerHAVIP:172.23.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0513 23:03:45.904992   11992 kube-vip.go:115] generating kube-vip config ...
	I0513 23:03:45.913605   11992 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0513 23:03:45.940720   11992 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0513 23:03:45.940720   11992 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.23.111.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
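The generated kube-vip manifest above configures the VIP entirely through container env vars (`cp_enable`, `lb_enable`, `address`, ...). A quick stdlib sketch for pulling those names out of such a manifest; the regex deliberately tolerates stray spacing around the colon, and the embedded snippet is a shortened stand-in for the full manifest:

```python
import re

# Abbreviated excerpt modeled on the kube-vip manifest above.
manifest = """\
    env:
    - name: vip_arp
      value: "true"
    - name : lb_enable
      value: "true"
    - name: lb_port
      value: "8443"
"""

# `\s*:` tolerates an occasional stray space before the colon
env_names = re.findall(r"-\s*name\s*:\s*(\S+)", manifest)
assert env_names == ["vip_arp", "lb_enable", "lb_port"]
```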
	I0513 23:03:45.949012   11992 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0513 23:03:45.968221   11992 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0513 23:03:45.977770   11992 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0513 23:03:45.995542   11992 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0513 23:03:45.995542   11992 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0513 23:03:45.995542   11992 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256

	I0513 23:03:45.995542   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0513 23:03:45.996084   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0513 23:03:46.009482   11992 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0513 23:03:46.009482   11992 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0513 23:03:46.010689   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 23:03:46.016039   11992 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0513 23:03:46.016629   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0513 23:03:46.052703   11992 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0513 23:03:46.052801   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0513 23:03:46.052905   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0513 23:03:46.063056   11992 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0513 23:03:46.130328   11992 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0513 23:03:46.130430   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
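Each binary URL above pins its content via `?checksum=file:...sha256`, i.e. the downloaded bytes are verified against a published SHA-256 digest before install. The client-side half of that check is just a digest comparison; a self-contained sketch (the payload here is a stand-in, not a real kubelet binary):

```python
import hashlib

def verify_sha256(data: bytes, expected_hex: str) -> bool:
    # Equivalent of checking a downloaded binary against its .sha256 file.
    return hashlib.sha256(data).hexdigest() == expected_hex

payload = b"fake kubelet bytes"  # stand-in for the downloaded binary
good = hashlib.sha256(payload).hexdigest()
assert verify_sha256(payload, good)
assert not verify_sha256(payload + b"!", good)  # any corruption flips the result
```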
	I0513 23:03:47.195166   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0513 23:03:47.213268   11992 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0513 23:03:47.245200   11992 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0513 23:03:47.276900   11992 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0513 23:03:47.319019   11992 ssh_runner.go:195] Run: grep 172.23.111.254	control-plane.minikube.internal$ /etc/hosts
	I0513 23:03:47.326581   11992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.111.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
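The bash one-liner above rewrites `/etc/hosts` idempotently: strip any stale `control-plane.minikube.internal` line, then append exactly one fresh entry for the HA VIP. The same logic as a stdlib sketch operating on a string rather than the live hosts file:

```python
def set_host_entry(hosts_text: str, ip: str, name: str) -> str:
    # Same idea as `{ grep -v NAME /etc/hosts; echo "IP\tNAME"; } > tmp`:
    # drop any old line for the name, append exactly one fresh entry.
    kept = [line for line in hosts_text.splitlines()
            if not line.rstrip().endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"

before = "127.0.0.1\tlocalhost\n10.0.0.9\tcontrol-plane.minikube.internal\n"
after = set_host_entry(before, "172.23.111.254", "control-plane.minikube.internal")
assert after.count("control-plane.minikube.internal") == 1  # stale entry replaced
assert "172.23.111.254\tcontrol-plane.minikube.internal" in after
```

Writing through a temp file and `cp`, as the log does, keeps the update atomic from the resolver's point of view.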
	I0513 23:03:47.357569   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:03:47.555814   11992 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 23:03:47.594834   11992 host.go:66] Checking if "ha-586300" exists ...
	I0513 23:03:47.595526   11992 start.go:316] joinCluster: &{Name:ha-586300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-586300 Namespace:default APIServerHAVIP:172.23.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.102.229 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.108.68 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.23.109.129 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 23:03:47.595672   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0513 23:03:47.595739   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 23:03:49.539057   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:03:49.539880   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:49.539964   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 23:03:51.886042   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 23:03:51.886042   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:51.886999   11992 sshutil.go:53] new ssh client: &{IP:172.23.102.229 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\id_rsa Username:docker}
	I0513 23:03:52.111019   11992 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.5151698s)
	I0513 23:03:52.111250   11992 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.23.109.129 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 23:03:52.111324   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7xcf6m.hzv0vmsdgs1e9s3x --discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-586300-m03 --control-plane --apiserver-advertise-address=172.23.109.129 --apiserver-bind-port=8443"
	I0513 23:04:35.498305   11992 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7xcf6m.hzv0vmsdgs1e9s3x --discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-586300-m03 --control-plane --apiserver-advertise-address=172.23.109.129 --apiserver-bind-port=8443": (43.3851762s)
	I0513 23:04:35.498378   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0513 23:04:36.271743   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-586300-m03 minikube.k8s.io/updated_at=2024_05_13T23_04_36_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761 minikube.k8s.io/name=ha-586300 minikube.k8s.io/primary=false
	I0513 23:04:36.443057   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-586300-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0513 23:04:36.599343   11992 start.go:318] duration metric: took 49.001961s to joinCluster
	I0513 23:04:36.599460   11992 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.23.109.129 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 23:04:36.603316   11992 out.go:177] * Verifying Kubernetes components...
	I0513 23:04:36.600510   11992 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:04:36.615543   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:04:37.004731   11992 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 23:04:37.053713   11992 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 23:04:37.053713   11992 kapi.go:59] client config for ha-586300: &rest.Config{Host:"https://172.23.111.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-586300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-586300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0513 23:04:37.053713   11992 kubeadm.go:477] Overriding stale ClientConfig host https://172.23.111.254:8443 with https://172.23.102.229:8443
	I0513 23:04:37.054718   11992 node_ready.go:35] waiting up to 6m0s for node "ha-586300-m03" to be "Ready" ...
	I0513 23:04:37.054718   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:37.054718   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:37.054718   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:37.054718   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:37.070156   11992 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0513 23:04:37.564635   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:37.564635   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:37.564635   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:37.564635   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:37.568214   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:38.055679   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:38.055679   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:38.055679   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:38.055679   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:38.063944   11992 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:04:38.560907   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:38.560907   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:38.560907   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:38.560907   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:38.568665   11992 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:04:39.068482   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:39.068482   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:39.068482   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:39.068482   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:39.075083   11992 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:04:39.076533   11992 node_ready.go:53] node "ha-586300-m03" has status "Ready":"False"
	I0513 23:04:39.557119   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:39.557335   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:39.557335   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:39.557335   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:39.578560   11992 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0513 23:04:40.060304   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:40.060304   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:40.060304   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:40.060304   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:40.065323   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:40.568727   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:40.568727   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:40.568727   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:40.568825   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:40.571341   11992 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:04:41.057195   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:41.057248   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:41.057248   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:41.057248   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:41.065729   11992 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 23:04:41.557705   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:41.557861   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:41.557861   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:41.557861   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:41.564466   11992 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:04:41.565459   11992 node_ready.go:53] node "ha-586300-m03" has status "Ready":"False"
	I0513 23:04:42.062638   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:42.062772   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:42.062772   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:42.062772   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:42.066697   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:42.563736   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:42.564133   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:42.564133   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:42.564218   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:42.569442   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:43.057546   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:43.057603   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.057661   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.057720   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.069762   11992 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0513 23:04:43.070434   11992 node_ready.go:49] node "ha-586300-m03" has status "Ready":"True"
	I0513 23:04:43.070434   11992 node_ready.go:38] duration metric: took 6.0154805s for node "ha-586300-m03" to be "Ready" ...
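The node_ready loop above is a plain poll-until-deadline pattern: re-issue the `GET /api/v1/nodes/...` roughly every half second until the node reports `Ready=True` or the 6m0s budget runs out. A generic stdlib sketch of that pattern, with a fake check standing in for the API call:

```python
import itertools
import time

def wait_for(check, timeout_s: float, interval_s: float = 0.5) -> bool:
    # Re-query until the condition holds or the deadline passes,
    # like the 6m0s node_ready wait in the log above.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_s)
    return False

# A fake node-status probe that reports Ready on the third poll.
responses = itertools.chain(["False", "False", "True"], itertools.repeat("True"))
assert wait_for(lambda: next(responses) == "True", timeout_s=5, interval_s=0.01)
```

Using `time.monotonic()` rather than wall-clock time keeps the deadline immune to clock adjustments during the wait.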
	I0513 23:04:43.070434   11992 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0513 23:04:43.070571   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods
	I0513 23:04:43.070571   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.070648   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.070648   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.082361   11992 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0513 23:04:43.090435   11992 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4qbhd" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:43.090435   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4qbhd
	I0513 23:04:43.090435   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.090435   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.090435   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.094369   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:43.095374   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:04:43.095374   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.095374   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.095374   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.099374   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:43.100374   11992 pod_ready.go:92] pod "coredns-7db6d8ff4d-4qbhd" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:43.100374   11992 pod_ready.go:81] duration metric: took 9.9389ms for pod "coredns-7db6d8ff4d-4qbhd" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:43.100374   11992 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wj8z7" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:43.100374   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wj8z7
	I0513 23:04:43.100374   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.100374   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.100374   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.104368   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:43.104368   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:04:43.104368   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.105437   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.105437   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.111362   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:43.112353   11992 pod_ready.go:92] pod "coredns-7db6d8ff4d-wj8z7" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:43.112353   11992 pod_ready.go:81] duration metric: took 11.9788ms for pod "coredns-7db6d8ff4d-wj8z7" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:43.112353   11992 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:43.112353   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300
	I0513 23:04:43.112353   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.112353   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.112353   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.118346   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:43.119547   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:04:43.119547   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.119547   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.119547   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.123057   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:43.125877   11992 pod_ready.go:92] pod "etcd-ha-586300" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:43.125949   11992 pod_ready.go:81] duration metric: took 13.5958ms for pod "etcd-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:43.126009   11992 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:43.126142   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:04:43.126142   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.126142   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.126142   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.129366   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:43.130366   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:04:43.130366   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.130366   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.130366   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.133368   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:43.134363   11992 pod_ready.go:92] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:43.134363   11992 pod_ready.go:81] duration metric: took 8.3538ms for pod "etcd-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:43.134363   11992 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-586300-m03" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:43.262567   11992 request.go:629] Waited for 128.1618ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m03
	I0513 23:04:43.262651   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m03
	I0513 23:04:43.262651   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.262651   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.262651   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.268409   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:43.466297   11992 request.go:629] Waited for 196.8581ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:43.466500   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:43.466500   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.466580   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.466580   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.471873   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:43.672837   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m03
	I0513 23:04:43.672938   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.672938   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.672938   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.678011   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:43.859711   11992 request.go:629] Waited for 180.2859ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:43.859821   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:43.859821   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.860044   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.860044   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.865613   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:44.140211   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m03
	I0513 23:04:44.140298   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:44.140298   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:44.140298   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:44.147322   11992 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:04:44.264537   11992 request.go:629] Waited for 115.8502ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:44.264845   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:44.264845   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:44.264933   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:44.264933   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:44.270590   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:44.638343   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m03
	I0513 23:04:44.638343   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:44.638343   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:44.638343   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:44.644923   11992 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:04:44.669130   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:44.669130   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:44.669130   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:44.669450   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:44.672596   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:45.137750   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m03
	I0513 23:04:45.137750   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:45.137750   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:45.137750   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:45.157212   11992 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0513 23:04:45.157903   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:45.158003   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:45.158003   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:45.158003   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:45.161196   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:45.162762   11992 pod_ready.go:102] pod "etcd-ha-586300-m03" in "kube-system" namespace has status "Ready":"False"
	I0513 23:04:45.636780   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m03
	I0513 23:04:45.636980   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:45.636980   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:45.636980   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:45.640245   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:45.641761   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:45.641838   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:45.641838   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:45.641838   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:45.644940   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:45.645834   11992 pod_ready.go:92] pod "etcd-ha-586300-m03" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:45.645935   11992 pod_ready.go:81] duration metric: took 2.5114732s for pod "etcd-ha-586300-m03" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:45.645935   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:45.667276   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300
	I0513 23:04:45.667276   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:45.667276   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:45.667276   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:45.677992   11992 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0513 23:04:45.871076   11992 request.go:629] Waited for 192.2872ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:04:45.871197   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:04:45.871365   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:45.871365   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:45.871365   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:45.876769   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:45.878173   11992 pod_ready.go:92] pod "kube-apiserver-ha-586300" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:45.878281   11992 pod_ready.go:81] duration metric: took 232.2294ms for pod "kube-apiserver-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:45.878281   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:46.072347   11992 request.go:629] Waited for 193.8518ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300-m02
	I0513 23:04:46.072347   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300-m02
	I0513 23:04:46.072347   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:46.072347   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:46.072347   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:46.077268   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:46.263166   11992 request.go:629] Waited for 183.9502ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:04:46.263559   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:04:46.263559   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:46.263559   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:46.263559   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:46.268862   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:46.269607   11992 pod_ready.go:92] pod "kube-apiserver-ha-586300-m02" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:46.269720   11992 pod_ready.go:81] duration metric: took 391.4232ms for pod "kube-apiserver-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:46.269720   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-586300-m03" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:46.463470   11992 request.go:629] Waited for 193.7425ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300-m03
	I0513 23:04:46.463470   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300-m03
	I0513 23:04:46.463470   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:46.463470   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:46.463729   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:46.470471   11992 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:04:46.665400   11992 request.go:629] Waited for 193.4979ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:46.665698   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:46.665698   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:46.665698   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:46.665698   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:46.670458   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:46.870757   11992 request.go:629] Waited for 93.5191ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300-m03
	I0513 23:04:46.871001   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300-m03
	I0513 23:04:46.871001   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:46.871109   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:46.871109   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:46.875768   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:47.058735   11992 request.go:629] Waited for 181.0397ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:47.059077   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:47.059169   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:47.059169   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:47.059169   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:47.063545   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:47.274723   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300-m03
	I0513 23:04:47.274723   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:47.274723   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:47.274831   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:47.280267   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:47.459963   11992 request.go:629] Waited for 176.7155ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:47.459963   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:47.459963   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:47.459963   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:47.459963   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:47.464600   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:47.773681   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300-m03
	I0513 23:04:47.773681   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:47.773681   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:47.773681   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:47.779445   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:47.867743   11992 request.go:629] Waited for 87.2819ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:47.867944   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:47.868057   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:47.868057   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:47.868057   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:47.874224   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:48.274301   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300-m03
	I0513 23:04:48.274301   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:48.274301   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:48.274301   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:48.279442   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:48.280919   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:48.280981   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:48.280981   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:48.280981   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:48.284450   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:48.285612   11992 pod_ready.go:102] pod "kube-apiserver-ha-586300-m03" in "kube-system" namespace has status "Ready":"False"
	I0513 23:04:48.779396   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300-m03
	I0513 23:04:48.779396   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:48.779396   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:48.779396   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:48.782969   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:48.784308   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:48.784308   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:48.784308   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:48.784308   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:48.788456   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:48.790095   11992 pod_ready.go:92] pod "kube-apiserver-ha-586300-m03" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:48.790149   11992 pod_ready.go:81] duration metric: took 2.5203078s for pod "kube-apiserver-ha-586300-m03" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:48.790149   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:48.872612   11992 request.go:629] Waited for 82.4601ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-586300
	I0513 23:04:48.872877   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-586300
	I0513 23:04:48.872877   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:48.872877   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:48.872877   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:48.887490   11992 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0513 23:04:49.059927   11992 request.go:629] Waited for 171.3573ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:04:49.060110   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:04:49.060110   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:49.060110   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:49.060172   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:49.063488   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:49.065187   11992 pod_ready.go:92] pod "kube-controller-manager-ha-586300" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:49.065263   11992 pod_ready.go:81] duration metric: took 275.1031ms for pod "kube-controller-manager-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:49.065263   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:49.265580   11992 request.go:629] Waited for 200.157ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-586300-m02
	I0513 23:04:49.265916   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-586300-m02
	I0513 23:04:49.265916   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:49.265916   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:49.265916   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:49.270289   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:49.469526   11992 request.go:629] Waited for 197.6993ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:04:49.469657   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:04:49.469719   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:49.469719   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:49.469807   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:49.475058   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:49.476117   11992 pod_ready.go:92] pod "kube-controller-manager-ha-586300-m02" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:49.476117   11992 pod_ready.go:81] duration metric: took 410.838ms for pod "kube-controller-manager-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:49.476117   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-586300-m03" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:49.672419   11992 request.go:629] Waited for 196.1296ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-586300-m03
	I0513 23:04:49.672419   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-586300-m03
	I0513 23:04:49.672419   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:49.672419   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:49.672419   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:49.677016   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:49.861399   11992 request.go:629] Waited for 182.8773ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:49.861399   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:49.861399   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:49.861399   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:49.861399   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:49.866016   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:49.866016   11992 pod_ready.go:92] pod "kube-controller-manager-ha-586300-m03" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:49.866016   11992 pod_ready.go:81] duration metric: took 389.8836ms for pod "kube-controller-manager-ha-586300-m03" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:49.866016   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2tqlw" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:50.064581   11992 request.go:629] Waited for 198.4283ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2tqlw
	I0513 23:04:50.064676   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2tqlw
	I0513 23:04:50.064676   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:50.064676   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:50.064676   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:50.075617   11992 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0513 23:04:50.271318   11992 request.go:629] Waited for 192.4575ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:50.271677   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:50.271677   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:50.271677   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:50.271677   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:50.277754   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:50.278308   11992 pod_ready.go:92] pod "kube-proxy-2tqlw" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:50.278408   11992 pod_ready.go:81] duration metric: took 412.3767ms for pod "kube-proxy-2tqlw" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:50.278408   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6mpjv" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:50.460514   11992 request.go:629] Waited for 182.0316ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6mpjv
	I0513 23:04:50.460719   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6mpjv
	I0513 23:04:50.460719   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:50.460719   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:50.460719   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:50.468546   11992 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:04:50.667493   11992 request.go:629] Waited for 197.7412ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:04:50.667662   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:04:50.667662   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:50.667662   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:50.667662   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:50.671986   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:50.673543   11992 pod_ready.go:92] pod "kube-proxy-6mpjv" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:50.673543   11992 pod_ready.go:81] duration metric: took 395.1195ms for pod "kube-proxy-6mpjv" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:50.673621   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-77zxb" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:50.871788   11992 request.go:629] Waited for 198.0932ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-77zxb
	I0513 23:04:50.871788   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-77zxb
	I0513 23:04:50.871980   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:50.871980   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:50.871980   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:50.876397   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:51.061548   11992 request.go:629] Waited for 183.0864ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:04:51.061929   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:04:51.062101   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:51.062101   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:51.062101   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:51.066933   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:51.068422   11992 pod_ready.go:92] pod "kube-proxy-77zxb" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:51.068422   11992 pod_ready.go:81] duration metric: took 394.7847ms for pod "kube-proxy-77zxb" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:51.068422   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:51.267021   11992 request.go:629] Waited for 197.9177ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-586300
	I0513 23:04:51.267271   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-586300
	I0513 23:04:51.267340   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:51.267409   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:51.267434   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:51.272539   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:51.470269   11992 request.go:629] Waited for 196.661ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:04:51.470269   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:04:51.470269   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:51.470269   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:51.470269   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:51.476105   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:51.477231   11992 pod_ready.go:92] pod "kube-scheduler-ha-586300" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:51.477340   11992 pod_ready.go:81] duration metric: took 408.9021ms for pod "kube-scheduler-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:51.477340   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:51.673228   11992 request.go:629] Waited for 195.6459ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-586300-m02
	I0513 23:04:51.673381   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-586300-m02
	I0513 23:04:51.673599   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:51.673685   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:51.673685   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:51.682788   11992 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0513 23:04:51.859659   11992 request.go:629] Waited for 176.6124ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:04:51.859659   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:04:51.859659   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:51.859659   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:51.859659   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:51.863659   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:51.864659   11992 pod_ready.go:92] pod "kube-scheduler-ha-586300-m02" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:51.864659   11992 pod_ready.go:81] duration metric: took 387.3039ms for pod "kube-scheduler-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:51.864659   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-586300-m03" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:52.060640   11992 request.go:629] Waited for 195.9733ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-586300-m03
	I0513 23:04:52.060640   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-586300-m03
	I0513 23:04:52.060640   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:52.060865   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:52.060865   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:52.065098   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:52.264257   11992 request.go:629] Waited for 197.7205ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:52.264328   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:52.264328   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:52.264328   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:52.264328   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:52.267797   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:52.268292   11992 pod_ready.go:92] pod "kube-scheduler-ha-586300-m03" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:52.268292   11992 pod_ready.go:81] duration metric: took 403.6178ms for pod "kube-scheduler-ha-586300-m03" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:52.268292   11992 pod_ready.go:38] duration metric: took 9.1974978s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0513 23:04:52.268292   11992 api_server.go:52] waiting for apiserver process to appear ...
	I0513 23:04:52.277758   11992 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 23:04:52.302829   11992 api_server.go:72] duration metric: took 15.70265s to wait for apiserver process to appear ...
	I0513 23:04:52.303351   11992 api_server.go:88] waiting for apiserver healthz status ...
	I0513 23:04:52.303351   11992 api_server.go:253] Checking apiserver healthz at https://172.23.102.229:8443/healthz ...
	I0513 23:04:52.312025   11992 api_server.go:279] https://172.23.102.229:8443/healthz returned 200:
	ok
	I0513 23:04:52.312886   11992 round_trippers.go:463] GET https://172.23.102.229:8443/version
	I0513 23:04:52.312886   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:52.312987   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:52.312987   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:52.314043   11992 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0513 23:04:52.314557   11992 api_server.go:141] control plane version: v1.30.0
	I0513 23:04:52.314557   11992 api_server.go:131] duration metric: took 11.2056ms to wait for apiserver health ...
	I0513 23:04:52.314557   11992 system_pods.go:43] waiting for kube-system pods to appear ...
	I0513 23:04:52.467102   11992 request.go:629] Waited for 152.4249ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods
	I0513 23:04:52.467102   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods
	I0513 23:04:52.467102   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:52.467429   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:52.467429   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:52.480662   11992 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0513 23:04:52.490874   11992 system_pods.go:59] 24 kube-system pods found
	I0513 23:04:52.490874   11992 system_pods.go:61] "coredns-7db6d8ff4d-4qbhd" [6fa6abce-1f7c-4119-b74c-e4e2275f77f4] Running
	I0513 23:04:52.490939   11992 system_pods.go:61] "coredns-7db6d8ff4d-wj8z7" [21d8cc35-f37a-42b6-9e44-dfce810d1d51] Running
	I0513 23:04:52.490939   11992 system_pods.go:61] "etcd-ha-586300" [a1809532-311c-4f80-9236-fec7256f7b3c] Running
	I0513 23:04:52.490939   11992 system_pods.go:61] "etcd-ha-586300-m02" [37b3bba9-35b3-4723-b954-94c4f45c9b96] Running
	I0513 23:04:52.490939   11992 system_pods.go:61] "etcd-ha-586300-m03" [1a637fcc-ab57-4fc2-be72-e925e46d8670] Running
	I0513 23:04:52.490939   11992 system_pods.go:61] "kindnet-59dc5" [c42f08e1-6016-4dc6-bf46-69571ccfabe8] Running
	I0513 23:04:52.490939   11992 system_pods.go:61] "kindnet-8hh55" [4fb9a98f-06d4-4333-89dc-b90c8b880f92] Running
	I0513 23:04:52.490939   11992 system_pods.go:61] "kindnet-vddtk" [bf6e57db-8270-4024-ba93-abce11d81513] Running
	I0513 23:04:52.490939   11992 system_pods.go:61] "kube-apiserver-ha-586300" [d6659d47-ce69-4334-a35c-7b66898b49de] Running
	I0513 23:04:52.491012   11992 system_pods.go:61] "kube-apiserver-ha-586300-m02" [0b8839d5-3133-4d52-9264-9d998bc54617] Running
	I0513 23:04:52.491012   11992 system_pods.go:61] "kube-apiserver-ha-586300-m03" [3c06b188-7d2a-4252-b636-54695945e26b] Running
	I0513 23:04:52.491012   11992 system_pods.go:61] "kube-controller-manager-ha-586300" [3416887d-320b-4417-b6ba-ffabb7b84885] Running
	I0513 23:04:52.491012   11992 system_pods.go:61] "kube-controller-manager-ha-586300-m02" [eccf51fc-16b7-4d89-95ab-59ec4e8fbc8c] Running
	I0513 23:04:52.491012   11992 system_pods.go:61] "kube-controller-manager-ha-586300-m03" [5e5e1656-8c0a-403c-b8cb-34dc58314947] Running
	I0513 23:04:52.491012   11992 system_pods.go:61] "kube-proxy-2tqlw" [6a4bf957-b55f-463f-aa7f-f2aa15b0f6fe] Running
	I0513 23:04:52.491069   11992 system_pods.go:61] "kube-proxy-6mpjv" [0cd7eb37-2ff4-487e-b5e6-9d71c69a4814] Running
	I0513 23:04:52.491069   11992 system_pods.go:61] "kube-proxy-77zxb" [bc2480b2-3de0-49c4-b84e-8ae7e85829a1] Running
	I0513 23:04:52.491069   11992 system_pods.go:61] "kube-scheduler-ha-586300" [8bb322de-7dd8-4780-ae04-9d18a293aa0b] Running
	I0513 23:04:52.491069   11992 system_pods.go:61] "kube-scheduler-ha-586300-m02" [c3bb6486-257a-4993-9127-34dada81473a] Running
	I0513 23:04:52.491069   11992 system_pods.go:61] "kube-scheduler-ha-586300-m03" [7146ded0-67a1-42b0-898a-d603a3deb02f] Running
	I0513 23:04:52.491069   11992 system_pods.go:61] "kube-vip-ha-586300" [5dfa662f-0df1-485a-a52b-fdcd87e23145] Running
	I0513 23:04:52.491069   11992 system_pods.go:61] "kube-vip-ha-586300-m02" [4372ac88-49f7-4dcd-9c13-1b8484817d28] Running
	I0513 23:04:52.491069   11992 system_pods.go:61] "kube-vip-ha-586300-m03" [7e267e8b-72f0-4f53-acf2-096f2535e1fe] Running
	I0513 23:04:52.491069   11992 system_pods.go:61] "storage-provisioner" [fc11360c-19a1-4d0b-966e-49946c8b0d47] Running
	I0513 23:04:52.491133   11992 system_pods.go:74] duration metric: took 176.5689ms to wait for pod list to return data ...
	I0513 23:04:52.491133   11992 default_sa.go:34] waiting for default service account to be created ...
	I0513 23:04:52.671249   11992 request.go:629] Waited for 180.1086ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/default/serviceaccounts
	I0513 23:04:52.671249   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/default/serviceaccounts
	I0513 23:04:52.671249   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:52.671249   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:52.671249   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:52.675408   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:52.675408   11992 default_sa.go:45] found service account: "default"
	I0513 23:04:52.675408   11992 default_sa.go:55] duration metric: took 184.2673ms for default service account to be created ...
	I0513 23:04:52.675408   11992 system_pods.go:116] waiting for k8s-apps to be running ...
	I0513 23:04:52.872985   11992 request.go:629] Waited for 197.5698ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods
	I0513 23:04:52.873190   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods
	I0513 23:04:52.873190   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:52.873190   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:52.873190   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:52.885965   11992 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0513 23:04:52.896305   11992 system_pods.go:86] 24 kube-system pods found
	I0513 23:04:52.896305   11992 system_pods.go:89] "coredns-7db6d8ff4d-4qbhd" [6fa6abce-1f7c-4119-b74c-e4e2275f77f4] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "coredns-7db6d8ff4d-wj8z7" [21d8cc35-f37a-42b6-9e44-dfce810d1d51] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "etcd-ha-586300" [a1809532-311c-4f80-9236-fec7256f7b3c] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "etcd-ha-586300-m02" [37b3bba9-35b3-4723-b954-94c4f45c9b96] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "etcd-ha-586300-m03" [1a637fcc-ab57-4fc2-be72-e925e46d8670] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kindnet-59dc5" [c42f08e1-6016-4dc6-bf46-69571ccfabe8] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kindnet-8hh55" [4fb9a98f-06d4-4333-89dc-b90c8b880f92] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kindnet-vddtk" [bf6e57db-8270-4024-ba93-abce11d81513] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kube-apiserver-ha-586300" [d6659d47-ce69-4334-a35c-7b66898b49de] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kube-apiserver-ha-586300-m02" [0b8839d5-3133-4d52-9264-9d998bc54617] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kube-apiserver-ha-586300-m03" [3c06b188-7d2a-4252-b636-54695945e26b] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kube-controller-manager-ha-586300" [3416887d-320b-4417-b6ba-ffabb7b84885] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kube-controller-manager-ha-586300-m02" [eccf51fc-16b7-4d89-95ab-59ec4e8fbc8c] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kube-controller-manager-ha-586300-m03" [5e5e1656-8c0a-403c-b8cb-34dc58314947] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kube-proxy-2tqlw" [6a4bf957-b55f-463f-aa7f-f2aa15b0f6fe] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kube-proxy-6mpjv" [0cd7eb37-2ff4-487e-b5e6-9d71c69a4814] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kube-proxy-77zxb" [bc2480b2-3de0-49c4-b84e-8ae7e85829a1] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kube-scheduler-ha-586300" [8bb322de-7dd8-4780-ae04-9d18a293aa0b] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kube-scheduler-ha-586300-m02" [c3bb6486-257a-4993-9127-34dada81473a] Running
	I0513 23:04:52.896861   11992 system_pods.go:89] "kube-scheduler-ha-586300-m03" [7146ded0-67a1-42b0-898a-d603a3deb02f] Running
	I0513 23:04:52.896861   11992 system_pods.go:89] "kube-vip-ha-586300" [5dfa662f-0df1-485a-a52b-fdcd87e23145] Running
	I0513 23:04:52.896861   11992 system_pods.go:89] "kube-vip-ha-586300-m02" [4372ac88-49f7-4dcd-9c13-1b8484817d28] Running
	I0513 23:04:52.896861   11992 system_pods.go:89] "kube-vip-ha-586300-m03" [7e267e8b-72f0-4f53-acf2-096f2535e1fe] Running
	I0513 23:04:52.896861   11992 system_pods.go:89] "storage-provisioner" [fc11360c-19a1-4d0b-966e-49946c8b0d47] Running
	I0513 23:04:52.896861   11992 system_pods.go:126] duration metric: took 221.4447ms to wait for k8s-apps to be running ...
	I0513 23:04:52.896861   11992 system_svc.go:44] waiting for kubelet service to be running ....
	I0513 23:04:52.905761   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 23:04:52.932075   11992 system_svc.go:56] duration metric: took 35.213ms WaitForService to wait for kubelet
	I0513 23:04:52.932169   11992 kubeadm.go:576] duration metric: took 16.3319651s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 23:04:52.932232   11992 node_conditions.go:102] verifying NodePressure condition ...
	I0513 23:04:53.061882   11992 request.go:629] Waited for 129.6449ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes
	I0513 23:04:53.062162   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes
	I0513 23:04:53.062162   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:53.062220   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:53.062220   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:53.067555   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:53.069829   11992 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0513 23:04:53.069904   11992 node_conditions.go:123] node cpu capacity is 2
	I0513 23:04:53.069904   11992 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0513 23:04:53.069988   11992 node_conditions.go:123] node cpu capacity is 2
	I0513 23:04:53.069988   11992 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0513 23:04:53.069988   11992 node_conditions.go:123] node cpu capacity is 2
	I0513 23:04:53.069988   11992 node_conditions.go:105] duration metric: took 137.7513ms to run NodePressure ...
	I0513 23:04:53.070057   11992 start.go:240] waiting for startup goroutines ...
	I0513 23:04:53.070107   11992 start.go:254] writing updated cluster config ...
	I0513 23:04:53.079632   11992 ssh_runner.go:195] Run: rm -f paused
	I0513 23:04:53.197275   11992 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0513 23:04:53.200376   11992 out.go:177] * Done! kubectl is now configured to use "ha-586300" cluster and "default" namespace by default
	
	
	==> Docker <==
	May 13 22:57:55 ha-586300 cri-dockerd[1228]: time="2024-05-13T22:57:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1dc60ff7d72473ab0684f88efded9a6c06b72fd2939e803ee49426c055808053/resolv.conf as [nameserver 172.23.96.1]"
	May 13 22:57:55 ha-586300 cri-dockerd[1228]: time="2024-05-13T22:57:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/60e4c610c1f0e8b5d4f1d96689e2e586ad95d91d4997167adbaa9bd619f47fb0/resolv.conf as [nameserver 172.23.96.1]"
	May 13 22:57:55 ha-586300 cri-dockerd[1228]: time="2024-05-13T22:57:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/660d74b20ca07c1448a8ec2841330781c87c8396ceac00a50f5c245d7005c802/resolv.conf as [nameserver 172.23.96.1]"
	May 13 22:57:55 ha-586300 dockerd[1332]: time="2024-05-13T22:57:55.576692450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 13 22:57:55 ha-586300 dockerd[1332]: time="2024-05-13T22:57:55.579011546Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 13 22:57:55 ha-586300 dockerd[1332]: time="2024-05-13T22:57:55.582243479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 22:57:55 ha-586300 dockerd[1332]: time="2024-05-13T22:57:55.582432086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 22:57:55 ha-586300 dockerd[1332]: time="2024-05-13T22:57:55.704386500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 13 22:57:55 ha-586300 dockerd[1332]: time="2024-05-13T22:57:55.704452603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 13 22:57:55 ha-586300 dockerd[1332]: time="2024-05-13T22:57:55.704471404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 22:57:55 ha-586300 dockerd[1332]: time="2024-05-13T22:57:55.704637711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 22:57:55 ha-586300 dockerd[1332]: time="2024-05-13T22:57:55.790142826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 13 22:57:55 ha-586300 dockerd[1332]: time="2024-05-13T22:57:55.793187551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 13 22:57:55 ha-586300 dockerd[1332]: time="2024-05-13T22:57:55.793277155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 22:57:55 ha-586300 dockerd[1332]: time="2024-05-13T22:57:55.793459463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 23:05:28 ha-586300 dockerd[1332]: time="2024-05-13T23:05:28.652843516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 13 23:05:28 ha-586300 dockerd[1332]: time="2024-05-13T23:05:28.652951620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 13 23:05:28 ha-586300 dockerd[1332]: time="2024-05-13T23:05:28.653003222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 23:05:28 ha-586300 dockerd[1332]: time="2024-05-13T23:05:28.653840451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 23:05:28 ha-586300 cri-dockerd[1228]: time="2024-05-13T23:05:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/29c75c86289830befef480ac259a062919c9f686f010616e6d34666d63b01a71/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 13 23:05:30 ha-586300 cri-dockerd[1228]: time="2024-05-13T23:05:30Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	May 13 23:05:30 ha-586300 dockerd[1332]: time="2024-05-13T23:05:30.295391717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 13 23:05:30 ha-586300 dockerd[1332]: time="2024-05-13T23:05:30.295463222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 13 23:05:30 ha-586300 dockerd[1332]: time="2024-05-13T23:05:30.295480423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 23:05:30 ha-586300 dockerd[1332]: time="2024-05-13T23:05:30.295587130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	82b9cb93f81cb       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   59 seconds ago      Running             busybox                   0                   29c75c8628983       busybox-fc5497c4f-v5w28
	3cca1819e1453       cbb01a7bd410d                                                                                         8 minutes ago       Running             coredns                   0                   660d74b20ca07       coredns-7db6d8ff4d-wj8z7
	0dd2364808abe       cbb01a7bd410d                                                                                         8 minutes ago       Running             coredns                   0                   60e4c610c1f0e       coredns-7db6d8ff4d-4qbhd
	a1cd86153923c       6e38f40d628db                                                                                         8 minutes ago       Running             storage-provisioner       0                   1dc60ff7d7247       storage-provisioner
	2a50dd327cee4       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              8 minutes ago       Running             kindnet-cni               0                   3772bac758f7f       kindnet-8hh55
	76729111ccec0       a0bf559e280cf                                                                                         8 minutes ago       Running             kube-proxy                0                   865c3491222f4       kube-proxy-77zxb
	d7f2345199207       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     9 minutes ago       Running             kube-vip                  0                   b91c3c1e3ee58       kube-vip-ha-586300
	f4e45fa6a7ff1       259c8277fcbbc                                                                                         9 minutes ago       Running             kube-scheduler            0                   29cd2491a9da3       kube-scheduler-ha-586300
	5aa59ec7b3e08       c7aad43836fa5                                                                                         9 minutes ago       Running             kube-controller-manager   0                   fee036179772b       kube-controller-manager-ha-586300
	54d5259eb4fda       c42f13656d0b2                                                                                         9 minutes ago       Running             kube-apiserver            0                   1d8f3d2c1281e       kube-apiserver-ha-586300
	6f280a956ea0d       3861cfcd7c04c                                                                                         9 minutes ago       Running             etcd                      0                   97eb70a28a452       etcd-ha-586300
	
	
	==> coredns [0dd2364808ab] <==
	[INFO] 10.244.1.2:43544 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.031890751s
	[INFO] 10.244.2.2:36236 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000229916s
	[INFO] 10.244.2.2:44456 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.136841212s
	[INFO] 10.244.0.4:47020 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110607s
	[INFO] 10.244.0.4:55740 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.089271972s
	[INFO] 10.244.1.2:39460 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093206s
	[INFO] 10.244.1.2:33929 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000192913s
	[INFO] 10.244.1.2:55027 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.028391608s
	[INFO] 10.244.1.2:42290 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000115608s
	[INFO] 10.244.1.2:60562 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105007s
	[INFO] 10.244.2.2:42343 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00029182s
	[INFO] 10.244.2.2:36425 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023484878s
	[INFO] 10.244.2.2:41351 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000345323s
	[INFO] 10.244.2.2:47550 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000157811s
	[INFO] 10.244.0.4:44658 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000242316s
	[INFO] 10.244.0.4:45569 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000159411s
	[INFO] 10.244.0.4:45724 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059004s
	[INFO] 10.244.0.4:48470 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014951s
	[INFO] 10.244.1.2:59764 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015371s
	[INFO] 10.244.2.2:49551 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184513s
	[INFO] 10.244.0.4:37570 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114208s
	[INFO] 10.244.0.4:46088 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076105s
	[INFO] 10.244.1.2:34919 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000129208s
	[INFO] 10.244.1.2:33254 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000115208s
	[INFO] 10.244.2.2:54967 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140109s
	
	
	==> coredns [3cca1819e145] <==
	[INFO] 10.244.2.2:55407 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088406s
	[INFO] 10.244.2.2:45280 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123108s
	[INFO] 10.244.2.2:43201 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00015161s
	[INFO] 10.244.2.2:56254 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051904s
	[INFO] 10.244.0.4:49781 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103507s
	[INFO] 10.244.0.4:37159 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000161311s
	[INFO] 10.244.0.4:42140 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012681351s
	[INFO] 10.244.0.4:36016 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141609s
	[INFO] 10.244.1.2:47054 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190913s
	[INFO] 10.244.1.2:33317 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000155411s
	[INFO] 10.244.1.2:38499 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014551s
	[INFO] 10.244.2.2:42977 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093907s
	[INFO] 10.244.2.2:40377 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127008s
	[INFO] 10.244.2.2:51922 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056604s
	[INFO] 10.244.0.4:41218 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015181s
	[INFO] 10.244.0.4:47098 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131609s
	[INFO] 10.244.1.2:51316 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122708s
	[INFO] 10.244.1.2:54718 - 5 "PTR IN 1.96.23.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000117108s
	[INFO] 10.244.2.2:53578 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000089606s
	[INFO] 10.244.2.2:55549 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000103207s
	[INFO] 10.244.2.2:53562 - 5 "PTR IN 1.96.23.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000052303s
	[INFO] 10.244.0.4:60896 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205013s
	[INFO] 10.244.0.4:34122 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000115108s
	[INFO] 10.244.0.4:48727 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000146909s
	[INFO] 10.244.0.4:47037 - 5 "PTR IN 1.96.23.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000052804s
	
	
	==> describe nodes <==
	Name:               ha-586300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-586300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	                    minikube.k8s.io/name=ha-586300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_13T22_57_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 May 2024 22:57:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-586300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 May 2024 23:06:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 May 2024 23:05:59 +0000   Mon, 13 May 2024 22:57:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 May 2024 23:05:59 +0000   Mon, 13 May 2024 22:57:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 May 2024 23:05:59 +0000   Mon, 13 May 2024 22:57:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 May 2024 23:05:59 +0000   Mon, 13 May 2024 22:57:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.102.229
	  Hostname:    ha-586300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 766fa47b08fc4cd186a4572970ac1cb6
	  System UUID:                cdb7f6e8-e965-6c40-80b5-9bdc5dedc2be
	  Boot ID:                    3912f1b6-ba39-4062-bb61-a816e1502cb2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-v5w28              0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 coredns-7db6d8ff4d-4qbhd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m47s
	  kube-system                 coredns-7db6d8ff4d-wj8z7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m47s
	  kube-system                 etcd-ha-586300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m1s
	  kube-system                 kindnet-8hh55                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m47s
	  kube-system                 kube-apiserver-ha-586300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m1s
	  kube-system                 kube-controller-manager-ha-586300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m1s
	  kube-system                 kube-proxy-77zxb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m47s
	  kube-system                 kube-scheduler-ha-586300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m1s
	  kube-system                 kube-vip-ha-586300                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m1s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m45s  kube-proxy       
	  Normal  Starting                 9m1s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m1s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m1s   kubelet          Node ha-586300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m1s   kubelet          Node ha-586300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m1s   kubelet          Node ha-586300 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m48s  node-controller  Node ha-586300 event: Registered Node ha-586300 in Controller
	  Normal  NodeReady                8m35s  kubelet          Node ha-586300 status is now: NodeReady
	  Normal  RegisteredNode           5m11s  node-controller  Node ha-586300 event: Registered Node ha-586300 in Controller
	  Normal  RegisteredNode           99s    node-controller  Node ha-586300 event: Registered Node ha-586300 in Controller
	
	
	Name:               ha-586300-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-586300-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	                    minikube.k8s.io/name=ha-586300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_13T23_01_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 May 2024 23:00:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-586300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 May 2024 23:06:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 May 2024 23:06:04 +0000   Mon, 13 May 2024 23:00:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 May 2024 23:06:04 +0000   Mon, 13 May 2024 23:00:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 May 2024 23:06:04 +0000   Mon, 13 May 2024 23:00:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 May 2024 23:06:04 +0000   Mon, 13 May 2024 23:01:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.108.68
	  Hostname:    ha-586300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 1bab7af2394349258bef727483df9097
	  System UUID:                805a87ba-4250-134c-ae99-e6f53ab0643b
	  Boot ID:                    e372f98e-968c-4b2e-8e3d-f9946c6e7b53
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hd72c                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 etcd-ha-586300-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m31s
	  kube-system                 kindnet-vddtk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m32s
	  kube-system                 kube-apiserver-ha-586300-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-controller-manager-ha-586300-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-proxy-6mpjv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-scheduler-ha-586300-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-vip-ha-586300-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m26s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m32s (x8 over 5m32s)  kubelet          Node ha-586300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m32s (x8 over 5m32s)  kubelet          Node ha-586300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m32s (x7 over 5m32s)  kubelet          Node ha-586300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m28s                  node-controller  Node ha-586300-m02 event: Registered Node ha-586300-m02 in Controller
	  Normal  RegisteredNode           5m11s                  node-controller  Node ha-586300-m02 event: Registered Node ha-586300-m02 in Controller
	  Normal  RegisteredNode           99s                    node-controller  Node ha-586300-m02 event: Registered Node ha-586300-m02 in Controller
	
	
	Name:               ha-586300-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-586300-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	                    minikube.k8s.io/name=ha-586300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_13T23_04_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 May 2024 23:04:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-586300-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 May 2024 23:06:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 May 2024 23:06:01 +0000   Mon, 13 May 2024 23:04:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 May 2024 23:06:01 +0000   Mon, 13 May 2024 23:04:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 May 2024 23:06:01 +0000   Mon, 13 May 2024 23:04:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 May 2024 23:06:01 +0000   Mon, 13 May 2024 23:04:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.109.129
	  Hostname:    ha-586300-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3af176932f04986b129edbdfe6ef66e
	  System UUID:                0ab3db21-b362-594f-971e-39a38f19c4b7
	  Boot ID:                    49acf628-75bd-4969-9049-a08500a01e57
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-njj9r                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 etcd-ha-586300-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         117s
	  kube-system                 kindnet-59dc5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m1s
	  kube-system                 kube-apiserver-ha-586300-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-ha-586300-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-2tqlw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-scheduler-ha-586300-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-vip-ha-586300-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 115s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m1s (x8 over 2m1s)  kubelet          Node ha-586300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s (x8 over 2m1s)  kubelet          Node ha-586300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s (x7 over 2m1s)  kubelet          Node ha-586300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           118s                 node-controller  Node ha-586300-m03 event: Registered Node ha-586300-m03 in Controller
	  Normal  RegisteredNode           116s                 node-controller  Node ha-586300-m03 event: Registered Node ha-586300-m03 in Controller
	  Normal  RegisteredNode           99s                  node-controller  Node ha-586300-m03 event: Registered Node ha-586300-m03 in Controller
	
	
	==> dmesg <==
	[  +7.077887] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[May13 22:56] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.165597] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[ +27.686009] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.076728] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.483758] systemd-fstab-generator[986]: Ignoring "noauto" option for root device
	[  +0.175285] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[  +0.200234] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	[  +2.728981] systemd-fstab-generator[1181]: Ignoring "noauto" option for root device
	[  +0.167405] systemd-fstab-generator[1193]: Ignoring "noauto" option for root device
	[  +0.168870] systemd-fstab-generator[1205]: Ignoring "noauto" option for root device
	[  +0.249018] systemd-fstab-generator[1221]: Ignoring "noauto" option for root device
	[May13 22:57] systemd-fstab-generator[1318]: Ignoring "noauto" option for root device
	[  +0.089588] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.722623] systemd-fstab-generator[1524]: Ignoring "noauto" option for root device
	[  +5.461929] systemd-fstab-generator[1714]: Ignoring "noauto" option for root device
	[  +0.098349] kauditd_printk_skb: 73 callbacks suppressed
	[  +8.516983] systemd-fstab-generator[2210]: Ignoring "noauto" option for root device
	[  +0.118307] kauditd_printk_skb: 72 callbacks suppressed
	[ +14.758608] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.266877] kauditd_printk_skb: 29 callbacks suppressed
	[May13 23:00] hrtimer: interrupt took 2839798 ns
	[May13 23:01] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [6f280a956ea0] <==
	{"level":"info","ts":"2024-05-13T23:04:32.172126Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"e433e3e9aac3d2bb","to":"d2840daa6738a193","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-05-13T23:04:32.172235Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"e433e3e9aac3d2bb","remote-peer-id":"d2840daa6738a193"}
	{"level":"warn","ts":"2024-05-13T23:04:32.9822Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"d2840daa6738a193","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-05-13T23:04:33.981903Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"d2840daa6738a193","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-05-13T23:04:34.039685Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"d2840daa6738a193","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"139.174618ms"}
	{"level":"warn","ts":"2024-05-13T23:04:34.039765Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"81e76dc494655f61","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"139.25922ms"}
	{"level":"info","ts":"2024-05-13T23:04:34.191721Z","caller":"traceutil/trace.go:171","msg":"trace[767979318] transaction","detail":"{read_only:false; number_of_response:0; response_revision:1389; }","duration":"486.444206ms","start":"2024-05-13T23:04:33.705261Z","end":"2024-05-13T23:04:34.191705Z","steps":["trace[767979318] 'process raft request'  (duration: 486.387704ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:04:34.191807Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-13T23:04:33.705245Z","time spent":"486.519209ms","remote":"127.0.0.1:49052","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":28,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-ha-586300-m03\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-ha-586300-m03\" value_size:4112 >> failure:<>"}
	{"level":"info","ts":"2024-05-13T23:04:34.191961Z","caller":"traceutil/trace.go:171","msg":"trace[685114368] linearizableReadLoop","detail":"{readStateIndex:1552; appliedIndex:1553; }","duration":"477.87071ms","start":"2024-05-13T23:04:33.714076Z","end":"2024-05-13T23:04:34.191947Z","steps":["trace[685114368] 'read index received'  (duration: 477.829709ms)","trace[685114368] 'applied index is now lower than readState.Index'  (duration: 36.901µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-13T23:04:34.192154Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"478.063517ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-13T23:04:34.192236Z","caller":"traceutil/trace.go:171","msg":"trace[630437217] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1389; }","duration":"478.128919ms","start":"2024-05-13T23:04:33.714047Z","end":"2024-05-13T23:04:34.192176Z","steps":["trace[630437217] 'agreement among raft nodes before linearized reading'  (duration: 478.063917ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:04:34.192275Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-13T23:04:33.714033Z","time spent":"478.231422ms","remote":"127.0.0.1:48868","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-05-13T23:04:34.205005Z","caller":"traceutil/trace.go:171","msg":"trace[440005508] transaction","detail":"{read_only:false; response_revision:1390; number_of_response:1; }","duration":"489.829323ms","start":"2024-05-13T23:04:33.715165Z","end":"2024-05-13T23:04:34.204994Z","steps":["trace[440005508] 'process raft request'  (duration: 489.565614ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:04:34.205174Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-13T23:04:33.715153Z","time spent":"489.974828ms","remote":"127.0.0.1:49052","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4709,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-2tqlw\" mod_revision:1374 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-2tqlw\" value_size:4658 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-2tqlw\" > >"}
	{"level":"warn","ts":"2024-05-13T23:04:34.212765Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.394123ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:435"}
	{"level":"info","ts":"2024-05-13T23:04:34.212837Z","caller":"traceutil/trace.go:171","msg":"trace[439211649] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:1392; }","duration":"200.489527ms","start":"2024-05-13T23:04:34.012338Z","end":"2024-05-13T23:04:34.212828Z","steps":["trace[439211649] 'agreement among raft nodes before linearized reading'  (duration: 200.359322ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:04:34.213122Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"236.924885ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-05-13T23:04:34.213166Z","caller":"traceutil/trace.go:171","msg":"trace[704144195] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1392; }","duration":"236.999688ms","start":"2024-05-13T23:04:33.976158Z","end":"2024-05-13T23:04:34.213158Z","steps":["trace[704144195] 'agreement among raft nodes before linearized reading'  (duration: 236.832882ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-13T23:04:35.499341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e433e3e9aac3d2bb switched to configuration voters=(9360571041583554401 15169264470418039187 16443737257191658171)"}
	{"level":"info","ts":"2024-05-13T23:04:35.500047Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"36504b5af51034a4","local-member-id":"e433e3e9aac3d2bb"}
	{"level":"info","ts":"2024-05-13T23:04:35.500144Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"e433e3e9aac3d2bb","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"d2840daa6738a193"}
	{"level":"warn","ts":"2024-05-13T23:04:39.631457Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"d2840daa6738a193","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"85.783274ms"}
	{"level":"warn","ts":"2024-05-13T23:04:39.631737Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"81e76dc494655f61","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"86.061983ms"}
	{"level":"info","ts":"2024-05-13T23:04:39.631774Z","caller":"traceutil/trace.go:171","msg":"trace[1415216423] transaction","detail":"{read_only:false; response_revision:1450; number_of_response:1; }","duration":"283.546696ms","start":"2024-05-13T23:04:39.348212Z","end":"2024-05-13T23:04:39.631759Z","steps":["trace[1415216423] 'process raft request'  (duration: 283.336589ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-13T23:05:29.487951Z","caller":"traceutil/trace.go:171","msg":"trace[1785179741] transaction","detail":"{read_only:false; response_revision:1691; number_of_response:1; }","duration":"124.133794ms","start":"2024-05-13T23:05:29.363722Z","end":"2024-05-13T23:05:29.487856Z","steps":["trace[1785179741] 'process raft request'  (duration: 123.934181ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:06:29 up 11 min,  0 users,  load average: 0.78, 0.61, 0.38
	Linux ha-586300 5.10.207 #1 SMP Thu May 9 02:07:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2a50dd327cee] <==
	I0513 23:05:40.645897       1 main.go:250] Node ha-586300-m03 has CIDR [10.244.2.0/24] 
	I0513 23:05:50.656577       1 main.go:223] Handling node with IPs: map[172.23.102.229:{}]
	I0513 23:05:50.656615       1 main.go:227] handling current node
	I0513 23:05:50.656626       1 main.go:223] Handling node with IPs: map[172.23.108.68:{}]
	I0513 23:05:50.656634       1 main.go:250] Node ha-586300-m02 has CIDR [10.244.1.0/24] 
	I0513 23:05:50.656980       1 main.go:223] Handling node with IPs: map[172.23.109.129:{}]
	I0513 23:05:50.657079       1 main.go:250] Node ha-586300-m03 has CIDR [10.244.2.0/24] 
	I0513 23:06:00.670998       1 main.go:223] Handling node with IPs: map[172.23.102.229:{}]
	I0513 23:06:00.671026       1 main.go:227] handling current node
	I0513 23:06:00.671036       1 main.go:223] Handling node with IPs: map[172.23.108.68:{}]
	I0513 23:06:00.671042       1 main.go:250] Node ha-586300-m02 has CIDR [10.244.1.0/24] 
	I0513 23:06:00.671864       1 main.go:223] Handling node with IPs: map[172.23.109.129:{}]
	I0513 23:06:00.671895       1 main.go:250] Node ha-586300-m03 has CIDR [10.244.2.0/24] 
	I0513 23:06:10.687763       1 main.go:223] Handling node with IPs: map[172.23.102.229:{}]
	I0513 23:06:10.687815       1 main.go:227] handling current node
	I0513 23:06:10.687829       1 main.go:223] Handling node with IPs: map[172.23.108.68:{}]
	I0513 23:06:10.687837       1 main.go:250] Node ha-586300-m02 has CIDR [10.244.1.0/24] 
	I0513 23:06:10.688010       1 main.go:223] Handling node with IPs: map[172.23.109.129:{}]
	I0513 23:06:10.688078       1 main.go:250] Node ha-586300-m03 has CIDR [10.244.2.0/24] 
	I0513 23:06:20.695007       1 main.go:223] Handling node with IPs: map[172.23.102.229:{}]
	I0513 23:06:20.695123       1 main.go:227] handling current node
	I0513 23:06:20.695136       1 main.go:223] Handling node with IPs: map[172.23.108.68:{}]
	I0513 23:06:20.695144       1 main.go:250] Node ha-586300-m02 has CIDR [10.244.1.0/24] 
	I0513 23:06:20.696072       1 main.go:223] Handling node with IPs: map[172.23.109.129:{}]
	I0513 23:06:20.696160       1 main.go:250] Node ha-586300-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [54d5259eb4fd] <==
	Trace[345524087]:  ---"Txn call completed" 563ms (23:04:24.832)]
	Trace[345524087]: [564.454801ms] [564.454801ms] END
	E0513 23:04:29.695201       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0513 23:04:29.696166       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0513 23:04:29.695308       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 6.701µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0513 23:04:29.698981       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0513 23:04:29.699576       1 timeout.go:142] post-timeout activity - time-elapsed: 4.36425ms, PATCH "/api/v1/namespaces/default/events/ha-586300-m03.17cf2ed374cd56f7" result: <nil>
	I0513 23:04:34.217208       1 trace.go:236] Trace[1504576256]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:011c1ed6-a0a7-481b-bc77-dc1b7b0a83e3,client:172.23.109.129,api-group:,api-version:v1,name:,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:POST (13-May-2024 23:04:33.702) (total time: 515ms):
	Trace[1504576256]: ---"Write to database call failed" len:2257,err:pods "etcd-ha-586300-m03" already exists 24ms (23:04:34.216)
	Trace[1504576256]: [515.098696ms] [515.098696ms] END
	E0513 23:05:33.827700       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51081: use of closed network connection
	E0513 23:05:34.383158       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51084: use of closed network connection
	E0513 23:05:35.902101       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51086: use of closed network connection
	E0513 23:05:36.390860       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51088: use of closed network connection
	E0513 23:05:36.861383       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51090: use of closed network connection
	E0513 23:05:37.285810       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51092: use of closed network connection
	E0513 23:05:37.706353       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51094: use of closed network connection
	E0513 23:05:38.133855       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51096: use of closed network connection
	E0513 23:05:38.538630       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51098: use of closed network connection
	E0513 23:05:39.310122       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51101: use of closed network connection
	E0513 23:05:49.710067       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51103: use of closed network connection
	E0513 23:05:50.119886       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51107: use of closed network connection
	E0513 23:06:00.528565       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51109: use of closed network connection
	E0513 23:06:00.961250       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51111: use of closed network connection
	E0513 23:06:11.386582       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51113: use of closed network connection
	
	
	==> kube-controller-manager [5aa59ec7b3e0] <==
	I0513 22:57:56.602763       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="43.902µs"
	I0513 23:00:57.731261       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-586300-m02\" does not exist"
	I0513 23:00:57.772518       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-586300-m02" podCIDRs=["10.244.1.0/24"]
	I0513 23:01:01.501109       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-586300-m02"
	I0513 23:04:28.844095       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-586300-m03\" does not exist"
	I0513 23:04:28.910130       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-586300-m03" podCIDRs=["10.244.2.0/24"]
	I0513 23:04:31.567115       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-586300-m03"
	I0513 23:05:26.956559       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="156.407703ms"
	I0513 23:05:27.002526       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.900786ms"
	I0513 23:05:27.204905       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="202.305789ms"
	I0513 23:05:27.511237       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="306.241279ms"
	E0513 23:05:27.511452       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0513 23:05:27.511516       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.701µs"
	I0513 23:05:27.518491       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.103µs"
	I0513 23:05:27.604260       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.059604ms"
	I0513 23:05:27.604690       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.502µs"
	I0513 23:05:27.689481       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="88.303µs"
	I0513 23:05:28.499804       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.302µs"
	I0513 23:05:28.789448       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.903µs"
	I0513 23:05:29.919071       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.94439ms"
	I0513 23:05:29.919151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.103µs"
	I0513 23:05:30.537352       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.510093ms"
	I0513 23:05:30.537999       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="92.806µs"
	I0513 23:05:30.965455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.803635ms"
	I0513 23:05:30.965698       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.504µs"
	
	
	==> kube-proxy [76729111ccec] <==
	I0513 22:57:43.581221       1 server_linux.go:69] "Using iptables proxy"
	I0513 22:57:43.609494       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.102.229"]
	I0513 22:57:43.668028       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0513 22:57:43.668180       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0513 22:57:43.668241       1 server_linux.go:165] "Using iptables Proxier"
	I0513 22:57:43.672519       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0513 22:57:43.672891       1 server.go:872] "Version info" version="v1.30.0"
	I0513 22:57:43.673286       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0513 22:57:43.677378       1 config.go:192] "Starting service config controller"
	I0513 22:57:43.678115       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0513 22:57:43.678268       1 config.go:101] "Starting endpoint slice config controller"
	I0513 22:57:43.678472       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0513 22:57:43.680090       1 config.go:319] "Starting node config controller"
	I0513 22:57:43.684681       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0513 22:57:43.779333       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0513 22:57:43.779388       1 shared_informer.go:320] Caches are synced for service config
	I0513 22:57:43.789057       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f4e45fa6a7ff] <==
	W0513 22:57:26.091329       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0513 22:57:26.091613       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0513 22:57:26.175961       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0513 22:57:26.176285       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0513 22:57:26.360733       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0513 22:57:26.360764       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0513 22:57:26.360805       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0513 22:57:26.360819       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0513 22:57:26.434970       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0513 22:57:26.435739       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0513 22:57:26.455981       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0513 22:57:26.456169       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0513 22:57:26.507102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0513 22:57:26.507320       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0513 22:57:26.570578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0513 22:57:26.570736       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0513 22:57:26.637206       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0513 22:57:26.637250       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0513 22:57:26.682315       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0513 22:57:26.682358       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0513 22:57:26.689206       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0513 22:57:26.689296       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0513 22:57:26.812562       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0513 22:57:26.812868       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0513 22:57:28.939739       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 13 23:03:28 ha-586300 kubelet[2217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 13 23:03:28 ha-586300 kubelet[2217]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 13 23:03:28 ha-586300 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 13 23:03:28 ha-586300 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 13 23:04:28 ha-586300 kubelet[2217]: E0513 23:04:28.546173    2217 iptables.go:577] "Could not set up iptables canary" err=<
	May 13 23:04:28 ha-586300 kubelet[2217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 13 23:04:28 ha-586300 kubelet[2217]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 13 23:04:28 ha-586300 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 13 23:04:28 ha-586300 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 13 23:05:26 ha-586300 kubelet[2217]: I0513 23:05:26.909353    2217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4qbhd" podStartSLOduration=464.909330987 podStartE2EDuration="7m44.909330987s" podCreationTimestamp="2024-05-13 22:57:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-13 22:57:56.58757889 +0000 UTC m=+28.279061805" watchObservedRunningTime="2024-05-13 23:05:26.909330987 +0000 UTC m=+478.600814002"
	May 13 23:05:26 ha-586300 kubelet[2217]: I0513 23:05:26.909891    2217 topology_manager.go:215] "Topology Admit Handler" podUID="45c7cfe3-d30b-454f-b606-0c78ca92f731" podNamespace="default" podName="busybox-fc5497c4f-v5w28"
	May 13 23:05:26 ha-586300 kubelet[2217]: W0513 23:05:26.923783    2217 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-586300" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-586300' and this object
	May 13 23:05:26 ha-586300 kubelet[2217]: E0513 23:05:26.924135    2217 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-586300" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-586300' and this object
	May 13 23:05:27 ha-586300 kubelet[2217]: I0513 23:05:27.048758    2217 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vczbq\" (UniqueName: \"kubernetes.io/projected/45c7cfe3-d30b-454f-b606-0c78ca92f731-kube-api-access-vczbq\") pod \"busybox-fc5497c4f-v5w28\" (UID: \"45c7cfe3-d30b-454f-b606-0c78ca92f731\") " pod="default/busybox-fc5497c4f-v5w28"
	May 13 23:05:28 ha-586300 kubelet[2217]: E0513 23:05:28.547809    2217 iptables.go:577] "Could not set up iptables canary" err=<
	May 13 23:05:28 ha-586300 kubelet[2217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 13 23:05:28 ha-586300 kubelet[2217]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 13 23:05:28 ha-586300 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 13 23:05:28 ha-586300 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 13 23:05:28 ha-586300 kubelet[2217]: I0513 23:05:28.877847    2217 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29c75c86289830befef480ac259a062919c9f686f010616e6d34666d63b01a71"
	May 13 23:06:28 ha-586300 kubelet[2217]: E0513 23:06:28.547206    2217 iptables.go:577] "Could not set up iptables canary" err=<
	May 13 23:06:28 ha-586300 kubelet[2217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 13 23:06:28 ha-586300 kubelet[2217]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 13 23:06:28 ha-586300 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 13 23:06:28 ha-586300 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 23:06:22.038545    3100 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-586300 -n ha-586300
E0513 23:06:35.981255    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-586300 -n ha-586300: (10.7675377s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-586300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (63.37s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (259.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 node start m02 -v=7 --alsologtostderr
E0513 23:22:50.650868    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0513 23:23:16.041079    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
E0513 23:23:32.843531    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
ha_test.go:420: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-586300 node start m02 -v=7 --alsologtostderr: exit status 1 (3m7.4801026s)

                                                
                                                
-- stdout --
	* Starting "ha-586300-m02" control-plane node in "ha-586300" cluster
	* Restarting existing hyperv VM for "ha-586300-m02" ...
	* Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	* Verifying Kubernetes components...
	* Enabled addons: 

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 23:21:32.583862    6944 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0513 23:21:32.642934    6944 out.go:291] Setting OutFile to fd 920 ...
	I0513 23:21:32.660926    6944 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 23:21:32.660996    6944 out.go:304] Setting ErrFile to fd 840...
	I0513 23:21:32.660996    6944 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 23:21:32.673466    6944 mustload.go:65] Loading cluster: ha-586300
	I0513 23:21:32.673976    6944 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:21:32.674686    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 23:21:34.607542    6944 main.go:141] libmachine: [stdout =====>] : Off
	
	I0513 23:21:34.607605    6944 main.go:141] libmachine: [stderr =====>] : 
	W0513 23:21:34.607686    6944 host.go:58] "ha-586300-m02" host status: Stopped
	I0513 23:21:34.611377    6944 out.go:177] * Starting "ha-586300-m02" control-plane node in "ha-586300" cluster
	I0513 23:21:34.614135    6944 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 23:21:34.614344    6944 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0513 23:21:34.614344    6944 cache.go:56] Caching tarball of preloaded images
	I0513 23:21:34.614934    6944 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0513 23:21:34.615191    6944 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 23:21:34.615239    6944 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\config.json ...
	I0513 23:21:34.617914    6944 start.go:360] acquireMachinesLock for ha-586300-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 23:21:34.618137    6944 start.go:364] duration metric: took 110.8µs to acquireMachinesLock for "ha-586300-m02"
	I0513 23:21:34.618397    6944 start.go:96] Skipping create...Using existing machine configuration
	I0513 23:21:34.618470    6944 fix.go:54] fixHost starting: m02
	I0513 23:21:34.618925    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 23:21:36.566193    6944 main.go:141] libmachine: [stdout =====>] : Off
	
	I0513 23:21:36.566193    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:21:36.566193    6944 fix.go:112] recreateIfNeeded on ha-586300-m02: state=Stopped err=<nil>
	W0513 23:21:36.566193    6944 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 23:21:36.568836    6944 out.go:177] * Restarting existing hyperv VM for "ha-586300-m02" ...
	I0513 23:21:36.571599    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-586300-m02
	I0513 23:21:39.403751    6944 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:21:39.403813    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:21:39.403813    6944 main.go:141] libmachine: Waiting for host to start...
	I0513 23:21:39.403813    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 23:21:41.442202    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:21:41.442202    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:21:41.442202    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:21:43.686297    6944 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:21:43.687296    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:21:44.694318    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 23:21:46.665250    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:21:46.665958    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:21:46.666038    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:21:48.958964    6944 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:21:48.959287    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:21:49.972718    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 23:21:51.933500    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:21:51.933563    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:21:51.933618    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:21:54.229504    6944 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:21:54.230449    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:21:55.239035    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 23:21:57.210445    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:21:57.210445    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:21:57.210445    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:21:59.468805    6944 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:21:59.469481    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:00.484489    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 23:22:02.508073    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:22:02.508073    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:02.508073    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:22:04.860530    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85
	
	I0513 23:22:04.860530    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:04.862473    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 23:22:06.792688    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:22:06.792961    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:06.792961    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:22:09.080242    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85
	
	I0513 23:22:09.080242    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:09.080800    6944 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\config.json ...
	I0513 23:22:09.082231    6944 machine.go:94] provisionDockerMachine start ...
	I0513 23:22:09.082763    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 23:22:11.013369    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:22:11.013369    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:11.014175    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:22:13.298624    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85
	
	I0513 23:22:13.298624    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:13.302922    6944 main.go:141] libmachine: Using SSH client type: native
	I0513 23:22:13.303445    6944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.85 22 <nil> <nil>}
	I0513 23:22:13.303445    6944 main.go:141] libmachine: About to run SSH command:
	hostname
	I0513 23:22:13.428566    6944 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0513 23:22:13.428566    6944 buildroot.go:166] provisioning hostname "ha-586300-m02"
	I0513 23:22:13.428666    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 23:22:15.391479    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:22:15.391479    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:15.391479    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:22:17.692402    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85
	
	I0513 23:22:17.692402    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:17.696753    6944 main.go:141] libmachine: Using SSH client type: native
	I0513 23:22:17.697219    6944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.85 22 <nil> <nil>}
	I0513 23:22:17.697219    6944 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-586300-m02 && echo "ha-586300-m02" | sudo tee /etc/hostname
	I0513 23:22:17.850301    6944 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-586300-m02
	
	I0513 23:22:17.850301    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 23:22:19.769596    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:22:19.769596    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:19.770448    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:22:22.045897    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85
	
	I0513 23:22:22.046039    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:22.050710    6944 main.go:141] libmachine: Using SSH client type: native
	I0513 23:22:22.051381    6944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.85 22 <nil> <nil>}
	I0513 23:22:22.051381    6944 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-586300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-586300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-586300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0513 23:22:22.209175    6944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0513 23:22:22.209175    6944 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0513 23:22:22.209175    6944 buildroot.go:174] setting up certificates
	I0513 23:22:22.209175    6944 provision.go:84] configureAuth start
	I0513 23:22:22.209175    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 23:22:24.165903    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:22:24.165903    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:24.165903    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:22:26.531962    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85
	
	I0513 23:22:26.531962    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:26.531962    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 23:22:28.496886    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:22:28.497180    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:28.497180    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:22:30.792371    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85
	
	I0513 23:22:30.792829    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:30.792881    6944 provision.go:143] copyHostCerts
	I0513 23:22:30.793161    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0513 23:22:30.793542    6944 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0513 23:22:30.793595    6944 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0513 23:22:30.793979    6944 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0513 23:22:30.795075    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0513 23:22:30.795368    6944 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0513 23:22:30.795428    6944 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0513 23:22:30.795919    6944 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0513 23:22:30.796860    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0513 23:22:30.796860    6944 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0513 23:22:30.796860    6944 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0513 23:22:30.797412    6944 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0513 23:22:30.798474    6944 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-586300-m02 san=[127.0.0.1 172.23.108.85 ha-586300-m02 localhost minikube]
	I0513 23:22:31.162932    6944 provision.go:177] copyRemoteCerts
	I0513 23:22:31.170929    6944 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0513 23:22:31.170929    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 23:22:33.098758    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:22:33.098758    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:33.099869    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:22:35.394202    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85
	
	I0513 23:22:35.394202    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:35.394202    6944 sshutil.go:53] new ssh client: &{IP:172.23.108.85 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\id_rsa Username:docker}
	I0513 23:22:35.499367    6944 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3282288s)
	I0513 23:22:35.499367    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0513 23:22:35.499367    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0513 23:22:35.544188    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0513 23:22:35.544188    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0513 23:22:35.586773    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0513 23:22:35.586773    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0513 23:22:35.629894    6944 provision.go:87] duration metric: took 13.4200766s to configureAuth
	I0513 23:22:35.629894    6944 buildroot.go:189] setting minikube options for container-runtime
	I0513 23:22:35.630503    6944 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:22:35.630503    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 23:22:37.569285    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:22:37.569964    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:37.570049    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:22:39.860841    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85
	
	I0513 23:22:39.861047    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:39.865045    6944 main.go:141] libmachine: Using SSH client type: native
	I0513 23:22:39.865045    6944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.85 22 <nil> <nil>}
	I0513 23:22:39.865045    6944 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0513 23:22:39.995769    6944 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0513 23:22:39.995769    6944 buildroot.go:70] root file system type: tmpfs
	I0513 23:22:39.995769    6944 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0513 23:22:39.995769    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 23:22:41.928614    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:22:41.928614    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:41.928614    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:22:44.213772    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85
	
	I0513 23:22:44.213977    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:44.218709    6944 main.go:141] libmachine: Using SSH client type: native
	I0513 23:22:44.219176    6944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.85 22 <nil> <nil>}
	I0513 23:22:44.219320    6944 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0513 23:22:44.375539    6944 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0513 23:22:44.375688    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 23:22:46.291242    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:22:46.291242    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:46.291242    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:22:48.591482    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85
	
	I0513 23:22:48.591482    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:48.596164    6944 main.go:141] libmachine: Using SSH client type: native
	I0513 23:22:48.596560    6944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.85 22 <nil> <nil>}
	I0513 23:22:48.596560    6944 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0513 23:22:50.983510    6944 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0513 23:22:50.983510    6944 machine.go:97] duration metric: took 41.8992695s to provisionDockerMachine
	I0513 23:22:50.983510    6944 start.go:293] postStartSetup for "ha-586300-m02" (driver="hyperv")
	I0513 23:22:50.983510    6944 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0513 23:22:50.992795    6944 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0513 23:22:50.992795    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 23:22:52.900212    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:22:52.900212    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:52.900212    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:22:55.189460    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85
	
	I0513 23:22:55.189817    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:55.190260    6944 sshutil.go:53] new ssh client: &{IP:172.23.108.85 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\id_rsa Username:docker}
	I0513 23:22:55.299965    6944 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3069598s)
	I0513 23:22:55.308215    6944 ssh_runner.go:195] Run: cat /etc/os-release
	I0513 23:22:55.314529    6944 info.go:137] Remote host: Buildroot 2023.02.9
	I0513 23:22:55.314529    6944 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0513 23:22:55.315348    6944 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0513 23:22:55.316137    6944 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> 59842.pem in /etc/ssl/certs
	I0513 23:22:55.316137    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /etc/ssl/certs/59842.pem
	I0513 23:22:55.332411    6944 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0513 23:22:55.354327    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /etc/ssl/certs/59842.pem (1708 bytes)
	I0513 23:22:55.400417    6944 start.go:296] duration metric: took 4.4166904s for postStartSetup
	I0513 23:22:55.408422    6944 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0513 23:22:55.408422    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 23:22:57.361431    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:22:57.362067    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:57.362144    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:22:59.655081    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85
	
	I0513 23:22:59.655081    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:22:59.655081    6944 sshutil.go:53] new ssh client: &{IP:172.23.108.85 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\id_rsa Username:docker}
	I0513 23:22:59.765198    6944 ssh_runner.go:235] Completed: sudo ls --almost-all -1 /var/lib/minikube/backup: (4.3565625s)
	I0513 23:22:59.765289    6944 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0513 23:22:59.774041    6944 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0513 23:22:59.846155    6944 fix.go:56] duration metric: took 1m25.2236074s for fixHost
	I0513 23:22:59.846234    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 23:23:01.780207    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:23:01.780928    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:23:01.781006    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:23:04.084412    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85
	
	I0513 23:23:04.084487    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:23:04.088516    6944 main.go:141] libmachine: Using SSH client type: native
	I0513 23:23:04.088635    6944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.85 22 <nil> <nil>}
	I0513 23:23:04.088635    6944 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0513 23:23:04.222779    6944 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715642584.424244294
	
	I0513 23:23:04.222927    6944 fix.go:216] guest clock: 1715642584.424244294
	I0513 23:23:04.222927    6944 fix.go:229] Guest: 2024-05-13 23:23:04.424244294 +0000 UTC Remote: 2024-05-13 23:22:59.8461551 +0000 UTC m=+87.324792101 (delta=4.578089194s)
	I0513 23:23:04.223074    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 23:23:06.141950    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:23:06.141950    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:23:06.142029    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:23:08.438265    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85
	
	I0513 23:23:08.438265    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:23:08.443302    6944 main.go:141] libmachine: Using SSH client type: native
	I0513 23:23:08.443677    6944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.85 22 <nil> <nil>}
	I0513 23:23:08.443740    6944 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715642584
	I0513 23:23:08.586595    6944 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 13 23:23:04 UTC 2024
	
	I0513 23:23:08.586595    6944 fix.go:236] clock set: Mon May 13 23:23:04 UTC 2024
	 (err=<nil>)
	I0513 23:23:08.586595    6944 start.go:83] releasing machines lock for "ha-586300-m02", held for 1m33.9638725s
	I0513 23:23:08.586595    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 23:23:10.546470    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:23:10.546665    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:23:10.546736    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:23:12.858175    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85
	
	I0513 23:23:12.858175    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:23:12.861354    6944 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0513 23:23:12.861521    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 23:23:12.868785    6944 ssh_runner.go:195] Run: systemctl --version
	I0513 23:23:12.868785    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 23:23:14.859029    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:23:14.859029    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:23:14.859029    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:23:14.859029    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:23:14.859029    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:23:14.859029    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:23:17.221423    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85
	
	I0513 23:23:17.221498    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:23:17.221777    6944 sshutil.go:53] new ssh client: &{IP:172.23.108.85 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\id_rsa Username:docker}
	I0513 23:23:17.243799    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85
	
	I0513 23:23:17.244227    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:23:17.244607    6944 sshutil.go:53] new ssh client: &{IP:172.23.108.85 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\id_rsa Username:docker}
	I0513 23:23:17.504877    6944 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6432936s)
	I0513 23:23:17.504877    6944 ssh_runner.go:235] Completed: systemctl --version: (4.6358638s)
	I0513 23:23:17.514250    6944 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0513 23:23:17.523632    6944 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0513 23:23:17.531615    6944 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0513 23:23:17.561152    6944 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0513 23:23:17.561268    6944 start.go:494] detecting cgroup driver to use...
	I0513 23:23:17.561448    6944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 23:23:17.606870    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0513 23:23:17.635064    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0513 23:23:17.656915    6944 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0513 23:23:17.665084    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0513 23:23:17.691845    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 23:23:17.720365    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0513 23:23:17.746369    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 23:23:17.772352    6944 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0513 23:23:17.804234    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0513 23:23:17.829852    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0513 23:23:17.856646    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0513 23:23:17.887135    6944 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0513 23:23:17.914355    6944 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0513 23:23:17.939359    6944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:23:18.125592    6944 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0513 23:23:18.154645    6944 start.go:494] detecting cgroup driver to use...
	I0513 23:23:18.163449    6944 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0513 23:23:18.197871    6944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 23:23:18.227301    6944 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0513 23:23:18.262278    6944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 23:23:18.293704    6944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 23:23:18.323116    6944 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0513 23:23:18.380426    6944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 23:23:18.404387    6944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 23:23:18.446916    6944 ssh_runner.go:195] Run: which cri-dockerd
	I0513 23:23:18.460361    6944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0513 23:23:18.478079    6944 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0513 23:23:18.520509    6944 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0513 23:23:18.732275    6944 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0513 23:23:18.902992    6944 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0513 23:23:18.902992    6944 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0513 23:23:18.940864    6944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:23:19.140925    6944 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0513 23:23:21.768578    6944 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6274277s)
	I0513 23:23:21.777553    6944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0513 23:23:21.809596    6944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 23:23:21.840679    6944 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0513 23:23:22.038963    6944 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0513 23:23:22.240558    6944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:23:22.439980    6944 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0513 23:23:22.476643    6944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 23:23:22.506775    6944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:23:22.696010    6944 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0513 23:23:22.812010    6944 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0513 23:23:22.819249    6944 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0513 23:23:22.827990    6944 start.go:562] Will wait 60s for crictl version
	I0513 23:23:22.836461    6944 ssh_runner.go:195] Run: which crictl
	I0513 23:23:22.852082    6944 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0513 23:23:22.902751    6944 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0513 23:23:22.909552    6944 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 23:23:22.946475    6944 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 23:23:22.977007    6944 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0513 23:23:22.977007    6944 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0513 23:23:22.985995    6944 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0513 23:23:22.985995    6944 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0513 23:23:22.985995    6944 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0513 23:23:22.985995    6944 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:95:ed Flags:up|broadcast|multicast|running}
	I0513 23:23:22.987995    6944 ip.go:210] interface addr: fe80::3ceb:68d:afab:af25/64
	I0513 23:23:22.987995    6944 ip.go:210] interface addr: 172.23.96.1/20
	I0513 23:23:22.996008    6944 ssh_runner.go:195] Run: grep 172.23.96.1	host.minikube.internal$ /etc/hosts
	I0513 23:23:23.002885    6944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0513 23:23:23.023928    6944 mustload.go:65] Loading cluster: ha-586300
	I0513 23:23:23.023993    6944 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:23:23.025095    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 23:23:24.937084    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:23:24.937711    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:23:24.937711    6944 host.go:66] Checking if "ha-586300" exists ...
	I0513 23:23:24.938316    6944 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300 for IP: 172.23.108.85
	I0513 23:23:24.938316    6944 certs.go:194] generating shared ca certs ...
	I0513 23:23:24.938316    6944 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:23:24.938920    6944 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0513 23:23:24.938920    6944 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0513 23:23:24.938920    6944 certs.go:256] generating profile certs ...
	I0513 23:23:24.939778    6944 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\client.key
	I0513 23:23:24.939778    6944 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.14ea6552
	I0513 23:23:24.939778    6944 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.14ea6552 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.102.229 172.23.108.85 172.23.109.129 172.23.111.254]
	I0513 23:23:25.214862    6944 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.14ea6552 ...
	I0513 23:23:25.214862    6944 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.14ea6552: {Name:mkb49e1b83900303147cc360608b1f509872b4f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:23:25.217001    6944 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.14ea6552 ...
	I0513 23:23:25.217001    6944 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.14ea6552: {Name:mkb8cf1af3adff27690356140cb79720b0990750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:23:25.217903    6944 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.14ea6552 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt
	I0513 23:23:25.232395    6944 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.14ea6552 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key
	I0513 23:23:25.233145    6944 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key
	I0513 23:23:25.233145    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0513 23:23:25.233145    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0513 23:23:25.233145    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0513 23:23:25.233145    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0513 23:23:25.233721    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0513 23:23:25.234533    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0513 23:23:25.234672    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0513 23:23:25.234672    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0513 23:23:25.235365    6944 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem (1338 bytes)
	W0513 23:23:25.235365    6944 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984_empty.pem, impossibly tiny 0 bytes
	I0513 23:23:25.235365    6944 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0513 23:23:25.235365    6944 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0513 23:23:25.236884    6944 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0513 23:23:25.237366    6944 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0513 23:23:25.237366    6944 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem (1708 bytes)
	I0513 23:23:25.238134    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:23:25.238134    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem -> /usr/share/ca-certificates/5984.pem
	I0513 23:23:25.238134    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /usr/share/ca-certificates/59842.pem
	I0513 23:23:25.238134    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 23:23:27.179411    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:23:27.179411    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:23:27.179411    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 23:23:29.502506    6944 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 23:23:29.502506    6944 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:23:29.502506    6944 sshutil.go:53] new ssh client: &{IP:172.23.102.229 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\id_rsa Username:docker}
	I0513 23:23:29.606338    6944 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0513 23:23:29.615719    6944 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0513 23:23:29.654086    6944 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0513 23:23:29.663013    6944 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0513 23:23:29.693404    6944 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0513 23:23:29.701393    6944 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0513 23:23:29.729540    6944 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0513 23:23:29.736478    6944 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0513 23:23:29.763502    6944 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0513 23:23:29.770315    6944 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0513 23:23:29.797757    6944 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0513 23:23:29.804236    6944 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0513 23:23:29.822594    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0513 23:23:29.870038    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0513 23:23:29.914735    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0513 23:23:29.961048    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0513 23:23:30.015383    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0513 23:23:30.061556    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0513 23:23:30.106124    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0513 23:23:30.151141    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0513 23:23:30.194086    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0513 23:23:30.245376    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem --> /usr/share/ca-certificates/5984.pem (1338 bytes)
	I0513 23:23:30.287091    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /usr/share/ca-certificates/59842.pem (1708 bytes)
	I0513 23:23:30.331908    6944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0513 23:23:30.362389    6944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0513 23:23:30.394004    6944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0513 23:23:30.426290    6944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0513 23:23:30.456942    6944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0513 23:23:30.487118    6944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0513 23:23:30.520944    6944 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0513 23:23:30.564149    6944 ssh_runner.go:195] Run: openssl version
	I0513 23:23:30.581526    6944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59842.pem && ln -fs /usr/share/ca-certificates/59842.pem /etc/ssl/certs/59842.pem"
	I0513 23:23:30.613405    6944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59842.pem
	I0513 23:23:30.619453    6944 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0513 23:23:30.628184    6944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59842.pem
	I0513 23:23:30.645733    6944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59842.pem /etc/ssl/certs/3ec20f2e.0"
	I0513 23:23:30.677254    6944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0513 23:23:30.708279    6944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:23:30.718162    6944 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:23:30.726931    6944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:23:30.744725    6944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0513 23:23:30.772167    6944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5984.pem && ln -fs /usr/share/ca-certificates/5984.pem /etc/ssl/certs/5984.pem"
	I0513 23:23:30.800631    6944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5984.pem
	I0513 23:23:30.806990    6944 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0513 23:23:30.816157    6944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5984.pem
	I0513 23:23:30.831934    6944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5984.pem /etc/ssl/certs/51391683.0"
	I0513 23:23:30.857775    6944 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0513 23:23:30.876211    6944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0513 23:23:30.893532    6944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0513 23:23:30.910303    6944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0513 23:23:30.927180    6944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0513 23:23:30.945313    6944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0513 23:23:30.962918    6944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0513 23:23:30.971176    6944 kubeadm.go:928] updating node {m02 172.23.108.85 8443 v1.30.0 docker true true} ...
	I0513 23:23:30.971176    6944 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-586300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.108.85
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-586300 Namespace:default APIServerHAVIP:172.23.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0513 23:23:30.971176    6944 kube-vip.go:115] generating kube-vip config ...
	I0513 23:23:30.983192    6944 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0513 23:23:31.008593    6944 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0513 23:23:31.008853    6944 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.23.111.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0513 23:23:31.019720    6944 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0513 23:23:31.040545    6944 binaries.go:44] Found k8s binaries, skipping transfer
	I0513 23:23:31.048508    6944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0513 23:23:31.066078    6944 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0513 23:23:31.096784    6944 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0513 23:23:31.125110    6944 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0513 23:23:31.164527    6944 ssh_runner.go:195] Run: grep 172.23.111.254	control-plane.minikube.internal$ /etc/hosts
	I0513 23:23:31.169973    6944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.111.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0513 23:23:31.199101    6944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:23:31.402186    6944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 23:23:31.438001    6944 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.23.108.85 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 23:23:31.438119    6944 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0513 23:23:31.441957    6944 out.go:177] * Verifying Kubernetes components...
	I0513 23:23:31.444153    6944 out.go:177] * Enabled addons: 
	I0513 23:23:31.438307    6944 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:23:31.445527    6944 addons.go:505] duration metric: took 7.407ms for enable addons: enabled=[]
	I0513 23:23:31.460612    6944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:23:31.685740    6944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 23:23:31.714086    6944 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 23:23:31.714712    6944 kapi.go:59] client config for ha-586300: &rest.Config{Host:"https://172.23.111.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-586300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-586300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0513 23:23:31.714852    6944 kubeadm.go:477] Overriding stale ClientConfig host https://172.23.111.254:8443 with https://172.23.102.229:8443
	I0513 23:23:31.715749    6944 cert_rotation.go:137] Starting client certificate rotation controller
	I0513 23:23:31.715802    6944 node_ready.go:35] waiting up to 6m0s for node "ha-586300-m02" to be "Ready" ...
	I0513 23:23:31.715802    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:31.715802    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:31.715802    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:31.715802    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:31.730952    6944 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0513 23:23:32.230267    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:32.230396    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:32.230396    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:32.230396    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:32.234752    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:32.235918    6944 node_ready.go:49] node "ha-586300-m02" has status "Ready":"True"
	I0513 23:23:32.235918    6944 node_ready.go:38] duration metric: took 520.09ms for node "ha-586300-m02" to be "Ready" ...
	I0513 23:23:32.235918    6944 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0513 23:23:32.235918    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods
	I0513 23:23:32.235918    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:32.235918    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:32.235918    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:32.247320    6944 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0513 23:23:32.260001    6944 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4qbhd" in "kube-system" namespace to be "Ready" ...
	I0513 23:23:32.260001    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4qbhd
	I0513 23:23:32.260001    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:32.260001    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:32.260001    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:32.263689    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:23:32.264914    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:23:32.264914    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:32.264914    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:32.264914    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:32.269172    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:32.269522    6944 pod_ready.go:92] pod "coredns-7db6d8ff4d-4qbhd" in "kube-system" namespace has status "Ready":"True"
	I0513 23:23:32.269522    6944 pod_ready.go:81] duration metric: took 9.5205ms for pod "coredns-7db6d8ff4d-4qbhd" in "kube-system" namespace to be "Ready" ...
	I0513 23:23:32.269522    6944 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wj8z7" in "kube-system" namespace to be "Ready" ...
	I0513 23:23:32.269522    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wj8z7
	I0513 23:23:32.270056    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:32.270056    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:32.270144    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:32.273467    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:23:32.274789    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:23:32.274789    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:32.274789    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:32.274789    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:32.278261    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:23:32.279067    6944 pod_ready.go:92] pod "coredns-7db6d8ff4d-wj8z7" in "kube-system" namespace has status "Ready":"True"
	I0513 23:23:32.279067    6944 pod_ready.go:81] duration metric: took 9.5445ms for pod "coredns-7db6d8ff4d-wj8z7" in "kube-system" namespace to be "Ready" ...
	I0513 23:23:32.279067    6944 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:23:32.279190    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300
	I0513 23:23:32.279190    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:32.279190    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:32.279190    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:32.286484    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:23:32.287470    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:23:32.287470    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:32.287470    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:32.287470    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:32.291259    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:23:32.292273    6944 pod_ready.go:92] pod "etcd-ha-586300" in "kube-system" namespace has status "Ready":"True"
	I0513 23:23:32.292273    6944 pod_ready.go:81] duration metric: took 13.1399ms for pod "etcd-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:23:32.292273    6944 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:23:32.292273    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:32.292273    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:32.292273    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:32.292273    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:32.296712    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:32.296712    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:32.297713    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:32.297713    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:32.297713    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:32.301913    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:32.796266    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:32.796319    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:32.796319    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:32.796319    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:32.800507    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:32.802152    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:32.802152    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:32.802152    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:32.802246    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:32.817113    6944 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0513 23:23:33.303381    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:33.303381    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:33.303465    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:33.303465    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:33.311934    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 23:23:33.313094    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:33.313094    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:33.313094    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:33.313637    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:33.319328    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:23:33.795738    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:33.795738    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:33.795738    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:33.795738    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:33.800744    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:23:33.803675    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:33.803768    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:33.803830    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:33.803830    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:33.808743    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:34.303622    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:34.303622    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:34.303622    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:34.303622    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:34.319837    6944 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0513 23:23:34.320737    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:34.321271    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:34.321271    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:34.321271    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:34.326105    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:34.327096    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:23:34.795220    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:34.795220    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:34.795220    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:34.795220    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:34.799302    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:34.800729    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:34.800729    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:34.800729    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:34.800785    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:34.803972    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:23:35.303475    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:35.303535    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:35.303664    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:35.303749    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:35.316928    6944 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0513 23:23:35.318421    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:35.318482    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:35.318482    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:35.318482    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:35.341166    6944 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0513 23:23:35.796752    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:35.796752    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:35.796752    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:35.796752    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:35.840717    6944 round_trippers.go:574] Response Status: 200 OK in 43 milliseconds
	I0513 23:23:35.841695    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:35.841695    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:35.841695    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:35.841695    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:35.847299    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:23:36.305318    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:36.305318    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:36.305415    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:36.305415    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:36.313751    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 23:23:36.314995    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:36.314995    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:36.314995    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:36.314995    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:36.320177    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:23:36.805415    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:36.805415    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:36.805415    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:36.805415    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:36.809813    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:36.811110    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:36.811175    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:36.811175    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:36.811175    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:36.815374    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:36.815374    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:23:37.293065    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:37.293065    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:37.293163    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:37.293163    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:37.298340    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:23:37.299774    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:37.299774    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:37.299774    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:37.299774    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:37.305382    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:23:37.807049    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:37.807049    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:37.807274    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:37.807401    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:37.812681    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:23:37.815005    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:37.815070    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:37.815070    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:37.815070    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:37.819241    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:38.293206    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:38.293206    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:38.293206    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:38.293206    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:38.297787    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:38.298953    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:38.298953    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:38.298953    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:38.298953    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:38.304240    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:23:38.793580    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:38.793716    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:38.793783    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:38.793783    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:38.799546    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:23:38.800379    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:38.800467    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:38.800467    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:38.800467    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:38.804149    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:23:39.297375    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:39.297762    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:39.297762    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:39.297762    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:39.302091    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:39.303981    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:39.303981    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:39.303981    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:39.303981    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:39.309015    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:23:39.309015    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:23:39.798492    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:39.798638    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:39.798638    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:39.798638    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:39.802292    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:23:39.804144    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:39.804232    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:39.804232    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:39.804232    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:39.808475    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:40.298209    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:40.298209    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:40.298209    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:40.298209    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:40.304958    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:23:40.306073    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:40.306130    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:40.306130    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:40.306130    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:40.315684    6944 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0513 23:23:40.797839    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:40.797839    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:40.798011    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:40.798011    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:40.804653    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:23:40.806653    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:40.806653    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:40.806653    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:40.806653    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:40.810206    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:23:41.294040    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:41.294040    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:41.294040    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:41.294040    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:41.299082    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:41.300436    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:41.300436    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:41.300436    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:41.300436    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:41.305080    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:41.793733    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:41.793839    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:41.793839    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:41.793839    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:41.797688    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:23:41.799983    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:41.800084    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:41.800084    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:41.800084    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:41.806552    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:23:41.807571    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:23:42.294141    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:42.294350    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:42.294350    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:42.294350    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:42.307262    6944 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0513 23:23:42.310323    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:42.310323    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:42.310323    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:42.310323    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:42.315267    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:42.794043    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:42.794447    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:42.794447    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:42.794447    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:42.799705    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:23:42.801103    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:42.801103    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:42.801103    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:42.801103    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:42.805673    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:43.297009    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:43.297009    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:43.297009    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:43.297009    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:43.301664    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:43.303395    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:43.303465    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:43.303465    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:43.303465    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:43.307532    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:43.798658    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:43.798731    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:43.798731    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:43.798731    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:43.803138    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:43.804830    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:43.804902    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:43.804902    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:43.804902    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:43.810085    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:23:43.810912    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:23:44.297762    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:44.297818    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:44.297841    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:44.297841    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:44.312872    6944 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0513 23:23:44.314199    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:44.314199    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:44.314199    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:44.314199    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:44.322806    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 23:23:44.798469    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:44.798582    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:44.798582    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:44.798582    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:44.802632    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:44.804255    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:44.804255    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:44.804255    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:44.804255    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:44.809181    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:45.302639    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:45.302922    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:45.302922    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:45.302922    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:45.310376    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:23:45.311644    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:45.311644    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:45.311644    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:45.311644    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:45.316477    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:45.805349    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:45.805349    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:45.805349    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:45.805349    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:45.811314    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:23:45.812739    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:45.812801    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:45.812801    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:45.812801    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:45.818012    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:23:45.819206    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:23:46.305362    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:46.305362    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:46.305362    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:46.305362    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:46.308894    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:23:46.310941    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:46.311001    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:46.311059    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:46.311059    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:46.315821    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:23:46.805934    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:46.805934    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:46.805934    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:46.805934    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:46.810599    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:46.811823    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:46.811823    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:46.811886    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:46.811886    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:46.815711    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:23:47.308664    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:47.308758    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:47.308758    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:47.308758    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:47.316735    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:23:47.317666    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:47.317666    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:47.317767    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:47.317767    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:47.321991    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:47.794332    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:47.794332    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:47.794332    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:47.794332    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:47.800386    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:23:47.801179    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:47.801179    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:47.801179    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:47.801179    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:47.805251    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:48.294687    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:48.294687    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:48.294687    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:48.294687    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:48.299712    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:23:48.301026    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:48.301182    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:48.301182    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:48.301182    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:48.306357    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:23:48.307228    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:23:48.795569    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:48.795569    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:48.795569    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:48.795569    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:48.800742    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:48.801458    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:48.801458    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:48.801458    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:48.801458    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:48.806016    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:49.297278    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:49.297362    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:49.297362    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:49.297362    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:49.301673    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:49.303203    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:49.303203    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:49.303203    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:49.303203    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:49.307373    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:49.796100    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:49.796212    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:49.796212    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:49.796212    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:49.801611    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:23:49.802328    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:49.802328    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:49.802328    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:49.802436    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:49.806479    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:50.296357    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:50.296357    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:50.296357    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:50.296357    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:50.303926    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:23:50.305429    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:50.305489    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:50.305489    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:50.305489    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:50.313005    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:23:50.313358    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:23:50.798938    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:50.798938    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:50.798938    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:50.798938    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:50.806350    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:23:50.807298    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:50.807298    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:50.807298    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:50.807298    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:50.810608    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:23:51.301060    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:51.301060    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:51.301060    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:51.301217    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:51.305669    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:51.306801    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:51.306801    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:51.306801    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:51.306858    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:51.311432    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:51.802743    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:51.803028    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:51.803142    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:51.803142    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:51.809772    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:23:51.811049    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:51.811049    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:51.811049    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:51.811049    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:51.814868    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:23:52.301210    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:52.301210    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:52.301210    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:52.301210    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:52.310317    6944 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0513 23:23:52.312092    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:52.312162    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:52.312162    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:52.312162    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:52.316006    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:23:52.316765    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:23:52.804067    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:52.804607    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:52.804607    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:52.804607    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:52.810593    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:23:52.811614    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:52.811614    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:52.811614    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:52.811614    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:52.817192    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:23:53.302478    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:53.302478    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:53.302478    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:53.302478    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:53.308996    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:23:53.309729    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:53.309729    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:53.309729    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:53.309729    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:53.314513    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:53.803262    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:53.803349    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:53.803349    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:53.803349    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:53.807514    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:53.809439    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:53.809540    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:53.809540    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:53.809540    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:53.813785    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:54.294136    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:54.294136    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:54.294136    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:54.294136    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:54.302306    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 23:23:54.303939    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:54.304004    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:54.304004    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:54.304004    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:54.313679    6944 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0513 23:23:54.796389    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:54.796692    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:54.796692    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:54.796692    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:54.801709    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:23:54.802304    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:54.802304    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:54.802304    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:54.802304    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:54.805967    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:23:54.807480    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:23:55.298759    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:55.298759    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:55.298759    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:55.298759    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:55.303334    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:55.304795    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:55.304795    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:55.304795    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:55.304795    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:55.309611    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:55.801499    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:55.801605    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:55.801605    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:55.801605    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:55.806227    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:55.808324    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:55.808393    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:55.808393    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:55.808393    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:55.812689    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:56.301290    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:56.301657    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:56.301657    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:56.301657    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:56.312970    6944 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0513 23:23:56.314480    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:56.314540    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:56.314540    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:56.314540    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:56.323473    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 23:23:56.801008    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:56.801008    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:56.801008    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:56.801008    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:56.807334    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:23:56.808125    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:56.808125    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:56.808125    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:56.808125    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:56.813367    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:23:56.814786    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:23:57.299387    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:57.299387    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:57.299387    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:57.299387    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:57.304021    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:57.305372    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:57.305372    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:57.305439    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:57.305439    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:57.308632    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:23:57.799610    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:57.799610    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:57.799610    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:57.799610    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:57.803997    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:57.805132    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:57.805132    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:57.805226    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:57.805226    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:57.809400    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:58.299794    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:58.299794    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:58.299916    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:58.299916    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:58.307639    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:23:58.308995    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:58.309068    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:58.309068    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:58.309106    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:58.315522    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:23:58.801403    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:58.801403    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:58.801403    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:58.801403    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:58.810456    6944 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0513 23:23:58.811261    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:58.811261    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:58.811261    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:58.811261    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:58.815945    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:23:58.817216    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:23:59.302029    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:59.302122    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:59.302122    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:59.302122    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:59.308317    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:23:59.309935    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:59.309935    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:59.309935    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:59.309935    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:59.313316    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:23:59.802781    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:23:59.803039    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:59.803117    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:59.803117    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:59.809251    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:23:59.810224    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:23:59.810224    6944 round_trippers.go:469] Request Headers:
	I0513 23:23:59.810224    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:23:59.810224    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:23:59.814218    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:00.298628    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:00.298628    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:00.298628    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:00.298628    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:00.304844    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:00.306000    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:00.306000    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:00.306060    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:00.306060    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:00.310475    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:00.798655    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:00.798844    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:00.798911    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:00.798911    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:00.806087    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:24:00.807732    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:00.807732    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:00.807793    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:00.807793    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:00.810979    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:01.298407    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:01.298407    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:01.298407    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:01.298407    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:01.302239    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:01.303843    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:01.303843    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:01.303843    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:01.303946    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:01.309107    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:01.309881    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:24:01.797627    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:01.797627    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:01.797627    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:01.797627    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:01.807715    6944 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0513 23:24:01.808672    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:01.808672    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:01.808672    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:01.808734    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:01.812917    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:02.301403    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:02.301504    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:02.301504    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:02.301504    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:02.313029    6944 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0513 23:24:02.315043    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:02.315043    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:02.315043    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:02.315043    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:02.329886    6944 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0513 23:24:02.802058    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:02.802058    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:02.802058    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:02.802058    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:02.806616    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:02.808292    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:02.808314    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:02.808314    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:02.808314    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:02.813383    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:03.303177    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:03.303177    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:03.303177    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:03.303177    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:03.308520    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:03.309978    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:03.310044    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:03.310044    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:03.310097    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:03.318154    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 23:24:03.319260    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:24:03.806178    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:03.806178    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:03.806414    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:03.806414    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:03.810857    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:03.811617    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:03.811617    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:03.811684    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:03.811684    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:03.816466    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:04.306492    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:04.306577    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:04.306577    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:04.306577    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:04.312032    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:04.314115    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:04.314115    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:04.314115    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:04.314115    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:04.319061    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:04.806593    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:04.806665    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:04.806738    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:04.806738    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:04.812554    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:04.813825    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:04.813825    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:04.813825    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:04.813825    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:04.818351    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:05.305637    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:05.305637    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:05.305637    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:05.305637    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:05.311035    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:05.312153    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:05.312255    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:05.312255    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:05.312329    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:05.318662    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:24:05.319648    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:24:05.805013    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:05.805013    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:05.805013    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:05.805013    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:05.812802    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:24:05.814424    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:05.814424    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:05.814424    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:05.814424    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:05.818136    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:06.304280    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:06.304349    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:06.304349    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:06.304349    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:06.310157    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:06.311798    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:06.311798    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:06.311798    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:06.311798    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:06.315673    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:06.803017    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:06.803086    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:06.803086    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:06.803086    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:06.807909    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:06.810019    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:06.810107    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:06.810107    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:06.810107    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:06.814139    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:07.302122    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:07.302122    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:07.302122    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:07.302122    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:07.306957    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:07.307993    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:07.307993    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:07.307993    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:07.307993    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:07.311179    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:07.805133    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:07.805133    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:07.805133    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:07.805133    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:07.810272    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:07.811149    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:07.811216    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:07.811216    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:07.811216    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:07.818370    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:24:07.818370    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:24:08.304913    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:08.304913    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:08.304913    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:08.304913    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:08.311582    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:24:08.312778    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:08.312844    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:08.312844    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:08.312844    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:08.316566    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:08.804201    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:08.804259    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:08.804327    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:08.804327    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:08.809138    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:08.810774    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:08.810774    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:08.810774    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:08.810774    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:08.816806    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:24:09.305398    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:09.305398    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:09.305499    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:09.305499    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:09.311113    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:09.312432    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:09.312432    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:09.312504    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:09.312504    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:09.316904    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:09.809170    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:09.809170    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:09.809259    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:09.809259    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:09.813904    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:09.815353    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:09.815410    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:09.815410    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:09.815410    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:09.819985    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:09.822553    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:24:10.294659    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:10.294732    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:10.294803    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:10.294803    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:10.300801    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:10.302037    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:10.302104    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:10.302104    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:10.302172    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:10.305423    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:10.810258    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:10.810334    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:10.810334    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:10.810334    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:10.817816    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:24:10.819053    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:10.819053    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:10.819053    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:10.819053    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:10.823653    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:11.296438    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:11.296766    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:11.296835    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:11.296835    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:11.303708    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:24:11.304471    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:11.304471    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:11.304471    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:11.304533    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:11.309890    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:11.809710    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:11.809962    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:11.809962    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:11.809962    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:11.814112    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:11.815677    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:11.815677    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:11.815793    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:11.815793    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:11.819971    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:12.294997    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:12.295108    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:12.295108    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:12.295108    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:12.300452    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:12.301986    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:12.302068    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:12.302068    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:12.302142    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:12.309688    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:24:12.312085    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:24:12.808553    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:12.808636    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:12.808636    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:12.808636    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:12.813748    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:12.815407    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:12.815407    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:12.815407    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:12.815407    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:12.821149    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:13.296194    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:13.296194    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:13.296194    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:13.296194    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:13.304804    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 23:24:13.305887    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:13.305951    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:13.305951    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:13.305951    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:13.310330    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:13.803998    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:13.804190    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:13.804247    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:13.804247    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:13.808510    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:13.810602    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:13.810642    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:13.810683    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:13.810683    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:13.815970    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:14.298934    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:14.298996    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:14.298996    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:14.298996    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:14.307500    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 23:24:14.308640    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:14.308640    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:14.308695    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:14.308695    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:14.314554    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:14.315388    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:24:14.805048    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:14.805271    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:14.805271    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:14.805367    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:14.810231    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:14.811458    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:14.811549    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:14.811549    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:14.811626    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:14.819054    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:24:15.300862    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:15.300862    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:15.300862    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:15.300958    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:15.304991    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:15.306987    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:15.307063    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:15.307063    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:15.307063    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:15.311424    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:15.800000    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:15.800000    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:15.800189    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:15.800189    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:15.804400    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:15.806370    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:15.806444    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:15.806444    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:15.806444    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:15.810054    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:16.299648    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:16.299724    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:16.299724    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:16.299724    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:16.305506    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:16.306928    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:16.307029    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:16.307029    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:16.307029    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:16.315305    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 23:24:16.798720    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:16.798838    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:16.798838    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:16.798838    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:16.802956    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:16.804864    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:16.804864    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:16.804864    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:16.804864    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:16.809027    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:16.810894    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:24:17.303961    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:17.304018    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:17.304018    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:17.304018    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:17.310743    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:24:17.311505    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:17.311505    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:17.311505    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:17.311627    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:17.316036    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:17.796644    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:17.796644    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:17.796644    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:17.796644    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:17.800813    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:17.802509    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:17.802509    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:17.802509    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:17.802509    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:17.806076    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:18.300968    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:18.300968    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:18.301064    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:18.301064    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:18.309174    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 23:24:18.311038    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:18.311104    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:18.311104    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:18.311104    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:18.315719    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:18.802455    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:18.802455    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:18.802455    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:18.802561    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:18.807164    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:18.808942    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:18.809027    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:18.809027    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:18.809027    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:18.812449    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:18.813914    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:24:19.303892    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:19.304071    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:19.304071    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:19.304071    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:19.310096    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:24:19.311098    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:19.311098    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:19.311098    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:19.311098    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:19.316270    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:19.802353    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:19.802800    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:19.802800    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:19.802800    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:19.808288    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:19.808762    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:19.808762    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:19.808762    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:19.808762    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:19.816081    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:24:20.299394    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:20.299469    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:20.299469    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:20.299469    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:20.305827    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:24:20.307267    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:20.307267    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:20.307267    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:20.307267    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:20.315716    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 23:24:20.798553    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:20.798553    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:20.798553    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:20.798553    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:20.802121    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:20.803475    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:20.803475    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:20.803475    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:20.803475    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:20.806706    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:21.297883    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:21.297956    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:21.298028    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:21.298028    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:21.302258    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:21.303953    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:21.304051    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:21.304051    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:21.304051    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:21.308358    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:21.309706    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:24:21.796105    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:21.796105    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:21.796105    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:21.796105    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:21.802749    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:24:21.803520    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:21.803520    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:21.803580    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:21.803580    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:21.808425    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:22.298805    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:22.298871    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:22.298871    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:22.298871    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:22.303347    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:22.304639    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:22.304639    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:22.304639    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:22.304639    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:22.312400    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:24:22.796971    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:22.796971    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:22.796971    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:22.796971    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:22.802331    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:22.803583    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:22.803583    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:22.803583    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:22.803583    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:22.806880    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:23.309256    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:23.309256    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:23.309256    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:23.309256    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:23.314767    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:23.315547    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:23.315618    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:23.315618    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:23.315618    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:23.320504    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:23.321684    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:24:23.808853    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:23.808853    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:23.808853    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:23.808853    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:23.814415    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:23.815413    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:23.815413    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:23.815413    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:23.815413    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:23.819772    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:24.310333    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:24.310399    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:24.310399    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:24.310399    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:24.315373    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:24.317030    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:24.317030    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:24.317030    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:24.317030    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:24.330673    6944 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0513 23:24:24.795691    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:24.795770    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:24.795770    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:24.795770    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:24.800215    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:24.801932    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:24.802003    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:24.802003    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:24.802003    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:24.806143    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:25.299997    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:25.300088    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:25.300088    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:25.300088    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:25.304556    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:25.306300    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:25.306300    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:25.306300    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:25.306300    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:25.310677    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:25.798942    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:25.798942    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:25.798942    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:25.798942    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:25.803412    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:25.805429    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:25.805527    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:25.805527    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:25.805599    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:25.808976    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:25.810299    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:24:26.300365    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:26.300442    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:26.300505    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:26.300505    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:26.305408    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:26.306759    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:26.306759    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:26.306834    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:26.306834    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:26.310587    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:26.799440    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:26.799440    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:26.799440    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:26.799440    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:26.808038    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 23:24:26.809703    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:26.809783    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:26.809783    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:26.809783    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:26.814091    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:27.297851    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:27.297851    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:27.297851    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:27.297851    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:27.302623    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:27.305065    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:27.305065    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:27.305139    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:27.305139    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:27.310447    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:27.809644    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:27.809644    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:27.809870    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:27.809870    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:27.814361    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:27.815182    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:27.815182    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:27.815248    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:27.815248    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:27.818955    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:27.820099    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:24:28.297764    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:28.297874    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:28.297874    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:28.297874    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:28.302241    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:28.303860    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:28.303860    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:28.303860    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:28.303860    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:28.310670    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:24:28.801630    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:28.801743    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:28.801743    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:28.801743    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:28.806294    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:28.807959    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:28.807959    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:28.807959    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:28.807959    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:28.811319    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:29.302555    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:29.302555    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:29.302648    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:29.302648    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:29.307472    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:29.308817    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:29.308817    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:29.308817    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:29.308817    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:29.313194    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:29.801140    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:29.801218    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:29.801218    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:29.801218    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:29.806848    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:29.807997    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:29.808119    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:29.808119    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:29.808119    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:29.815887    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:24:30.299685    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:30.299685    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:30.299791    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:30.299791    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:30.307143    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:24:30.308546    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:30.308602    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:30.308602    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:30.308602    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:30.312804    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:30.313510    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:24:30.798765    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:30.798926    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:30.798926    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:30.798926    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:30.803723    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:30.804420    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:30.804523    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:30.804523    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:30.804523    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:30.809121    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:31.295648    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:31.295648    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:31.295648    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:31.295648    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:31.299576    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:31.300743    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:31.300743    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:31.300828    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:31.300828    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:31.304049    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:31.799839    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:31.800220    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:31.800220    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:31.800220    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:31.804497    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:31.805561    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:31.805561    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:31.805561    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:31.805561    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:31.810605    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:32.299027    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:32.299027    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:32.299027    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:32.299027    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:32.307065    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 23:24:32.307625    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:32.307625    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:32.307625    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:32.307625    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:32.312204    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:32.800429    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:32.800547    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:32.800547    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:32.800547    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:32.826392    6944 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0513 23:24:32.829193    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:32.829263    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:32.829263    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:32.829263    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:32.836379    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:24:32.837453    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:24:33.299761    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:33.299761    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:33.299848    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:33.299848    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:33.304798    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:33.305653    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:33.305653    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:33.305723    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:33.305723    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:33.309418    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:33.801608    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:33.801608    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:33.801608    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:33.801608    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:33.806167    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:33.807379    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:33.807379    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:33.807379    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:33.807379    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:33.813976    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:24:34.303669    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:34.303732    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:34.303732    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:34.303788    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:34.313132    6944 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0513 23:24:34.314216    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:34.314216    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:34.314216    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:34.314216    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:34.320644    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:24:34.806343    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:34.806524    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:34.806524    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:34.806524    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:34.810951    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:34.812377    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:34.812377    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:34.812377    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:34.812377    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:34.818031    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:35.306779    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:35.306779    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:35.306779    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:35.306779    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:35.310892    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:35.313076    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:35.313140    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:35.313140    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:35.313140    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:35.316257    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:35.317243    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:24:35.807010    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:35.807010    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:35.807010    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:35.807010    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:35.811348    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:35.812484    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:35.812636    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:35.812636    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:35.812636    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:35.816958    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:36.309280    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:36.309345    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:36.309412    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:36.309412    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:36.313968    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:36.315369    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:36.315369    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:36.315369    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:36.315369    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:36.320096    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:36.810076    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:36.810394    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:36.810394    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:36.810394    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:36.815445    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:36.815445    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:36.815445    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:36.816639    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:36.816639    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:36.820872    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:37.309407    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:37.309407    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:37.309407    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:37.309407    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:37.314978    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:37.315750    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:37.315750    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:37.315750    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:37.315750    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:37.319616    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:37.320360    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:24:37.809720    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:37.809720    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:37.809720    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:37.809720    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:37.815014    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:37.816081    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:37.816081    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:37.816177    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:37.816177    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:37.820420    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:38.297198    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:38.297439    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:38.297439    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:38.297439    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:38.302712    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:38.304074    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:38.304074    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:38.304185    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:38.304185    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:38.312420    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:24:38.800395    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:38.800461    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:38.800461    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:38.800461    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:38.806641    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:24:38.807815    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:38.807815    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:38.807866    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:38.807866    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:38.811528    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:24:39.303042    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:39.303042    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:39.303042    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:39.303042    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:39.308570    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:24:39.309655    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:39.309725    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:39.309725    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:39.309725    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:39.316985    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:24:39.805299    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:24:39.805299    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:39.805299    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:39.805381    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:39.809389    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:24:39.811275    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:24:39.811275    6944 round_trippers.go:469] Request Headers:
	I0513 23:24:39.811379    6944 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:24:39.811379    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:24:39.817610    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:24:39.818582    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"

** /stderr **
ha_test.go:422: W0513 23:21:32.583862    6944 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0513 23:21:32.642934    6944 out.go:291] Setting OutFile to fd 920 ...
I0513 23:21:32.660926    6944 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 23:21:32.660996    6944 out.go:304] Setting ErrFile to fd 840...
I0513 23:21:32.660996    6944 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 23:21:32.673466    6944 mustload.go:65] Loading cluster: ha-586300
I0513 23:21:32.673976    6944 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 23:21:32.674686    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
I0513 23:21:34.607542    6944 main.go:141] libmachine: [stdout =====>] : Off

I0513 23:21:34.607605    6944 main.go:141] libmachine: [stderr =====>] : 
W0513 23:21:34.607686    6944 host.go:58] "ha-586300-m02" host status: Stopped
I0513 23:21:34.611377    6944 out.go:177] * Starting "ha-586300-m02" control-plane node in "ha-586300" cluster
I0513 23:21:34.614135    6944 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0513 23:21:34.614344    6944 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
I0513 23:21:34.614344    6944 cache.go:56] Caching tarball of preloaded images
I0513 23:21:34.614934    6944 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0513 23:21:34.615191    6944 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0513 23:21:34.615239    6944 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\config.json ...
I0513 23:21:34.617914    6944 start.go:360] acquireMachinesLock for ha-586300-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0513 23:21:34.618137    6944 start.go:364] duration metric: took 110.8µs to acquireMachinesLock for "ha-586300-m02"
I0513 23:21:34.618397    6944 start.go:96] Skipping create...Using existing machine configuration
I0513 23:21:34.618470    6944 fix.go:54] fixHost starting: m02
I0513 23:21:34.618925    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
I0513 23:21:36.566193    6944 main.go:141] libmachine: [stdout =====>] : Off

I0513 23:21:36.566193    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:21:36.566193    6944 fix.go:112] recreateIfNeeded on ha-586300-m02: state=Stopped err=<nil>
W0513 23:21:36.566193    6944 fix.go:138] unexpected machine state, will restart: <nil>
I0513 23:21:36.568836    6944 out.go:177] * Restarting existing hyperv VM for "ha-586300-m02" ...
I0513 23:21:36.571599    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-586300-m02
I0513 23:21:39.403751    6944 main.go:141] libmachine: [stdout =====>] : 
I0513 23:21:39.403813    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:21:39.403813    6944 main.go:141] libmachine: Waiting for host to start...
I0513 23:21:39.403813    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
I0513 23:21:41.442202    6944 main.go:141] libmachine: [stdout =====>] : Running

I0513 23:21:41.442202    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:21:41.442202    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
I0513 23:21:43.686297    6944 main.go:141] libmachine: [stdout =====>] : 
I0513 23:21:43.687296    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:21:44.694318    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
I0513 23:21:46.665250    6944 main.go:141] libmachine: [stdout =====>] : Running

I0513 23:21:46.665958    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:21:46.666038    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
I0513 23:21:48.958964    6944 main.go:141] libmachine: [stdout =====>] : 
I0513 23:21:48.959287    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:21:49.972718    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
I0513 23:21:51.933500    6944 main.go:141] libmachine: [stdout =====>] : Running

I0513 23:21:51.933563    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:21:51.933618    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
I0513 23:21:54.229504    6944 main.go:141] libmachine: [stdout =====>] : 
I0513 23:21:54.230449    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:21:55.239035    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
I0513 23:21:57.210445    6944 main.go:141] libmachine: [stdout =====>] : Running

I0513 23:21:57.210445    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:21:57.210445    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
I0513 23:21:59.468805    6944 main.go:141] libmachine: [stdout =====>] : 
I0513 23:21:59.469481    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:00.484489    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
I0513 23:22:02.508073    6944 main.go:141] libmachine: [stdout =====>] : Running

I0513 23:22:02.508073    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:02.508073    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
I0513 23:22:04.860530    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85

I0513 23:22:04.860530    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:04.862473    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
I0513 23:22:06.792688    6944 main.go:141] libmachine: [stdout =====>] : Running

I0513 23:22:06.792961    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:06.792961    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
I0513 23:22:09.080242    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85

I0513 23:22:09.080242    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:09.080800    6944 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\config.json ...
I0513 23:22:09.082231    6944 machine.go:94] provisionDockerMachine start ...
I0513 23:22:09.082763    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
I0513 23:22:11.013369    6944 main.go:141] libmachine: [stdout =====>] : Running

I0513 23:22:11.013369    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:11.014175    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
I0513 23:22:13.298624    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85

I0513 23:22:13.298624    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:13.302922    6944 main.go:141] libmachine: Using SSH client type: native
I0513 23:22:13.303445    6944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.85 22 <nil> <nil>}
I0513 23:22:13.303445    6944 main.go:141] libmachine: About to run SSH command:
hostname
I0513 23:22:13.428566    6944 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I0513 23:22:13.428566    6944 buildroot.go:166] provisioning hostname "ha-586300-m02"
I0513 23:22:13.428666    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
I0513 23:22:15.391479    6944 main.go:141] libmachine: [stdout =====>] : Running

I0513 23:22:15.391479    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:15.391479    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
I0513 23:22:17.692402    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85

I0513 23:22:17.692402    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:17.696753    6944 main.go:141] libmachine: Using SSH client type: native
I0513 23:22:17.697219    6944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.85 22 <nil> <nil>}
I0513 23:22:17.697219    6944 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-586300-m02 && echo "ha-586300-m02" | sudo tee /etc/hostname
I0513 23:22:17.850301    6944 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-586300-m02

I0513 23:22:17.850301    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
I0513 23:22:19.769596    6944 main.go:141] libmachine: [stdout =====>] : Running

I0513 23:22:19.769596    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:19.770448    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
I0513 23:22:22.045897    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85

I0513 23:22:22.046039    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:22.050710    6944 main.go:141] libmachine: Using SSH client type: native
I0513 23:22:22.051381    6944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.85 22 <nil> <nil>}
I0513 23:22:22.051381    6944 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sha-586300-m02' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-586300-m02/g' /etc/hosts;
			else 
				echo '127.0.1.1 ha-586300-m02' | sudo tee -a /etc/hosts; 
			fi
		fi
I0513 23:22:22.209175    6944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
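Editor's note: the `/etc/hosts` edit minikube just ran over SSH (pin `127.0.1.1` to the node name) can be tried standalone. This sketch points the same grep/sed logic at a temp file so it needs no root; the seed contents and the use of GNU `sed -i` are illustrative assumptions, not taken from the log.

```shell
# Same pin-127.0.1.1 logic as the SSH command above, but against a
# temp file instead of /etc/hosts (seed contents are illustrative).
hosts=$(mktemp)
name=ha-586300-m02
printf '127.0.0.1 localhost\n127.0.1.1 minikube\n' > "$hosts"

# Only touch the file if the hostname is not already present.
if ! grep -q "[[:space:]]$name\$" "$hosts"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
        # Rewrite the existing 127.0.1.1 entry (GNU sed -i).
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
    else
        # No 127.0.1.1 line yet: append one.
        echo "127.0.1.1 $name" >> "$hosts"
    fi
fi
cat "$hosts"
```

Running it twice is a no-op the second time, which is why the real command is safe to re-run on every provision.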
I0513 23:22:22.209175    6944 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
I0513 23:22:22.209175    6944 buildroot.go:174] setting up certificates
I0513 23:22:22.209175    6944 provision.go:84] configureAuth start
I0513 23:22:22.209175    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
I0513 23:22:24.165903    6944 main.go:141] libmachine: [stdout =====>] : Running

I0513 23:22:24.165903    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:24.165903    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
I0513 23:22:26.531962    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85

I0513 23:22:26.531962    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:26.531962    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
I0513 23:22:28.496886    6944 main.go:141] libmachine: [stdout =====>] : Running

I0513 23:22:28.497180    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:28.497180    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
I0513 23:22:30.792371    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85

I0513 23:22:30.792829    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:30.792881    6944 provision.go:143] copyHostCerts
I0513 23:22:30.793161    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
I0513 23:22:30.793542    6944 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
I0513 23:22:30.793595    6944 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
I0513 23:22:30.793979    6944 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
I0513 23:22:30.795075    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
I0513 23:22:30.795368    6944 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
I0513 23:22:30.795428    6944 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
I0513 23:22:30.795919    6944 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
I0513 23:22:30.796860    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
I0513 23:22:30.796860    6944 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
I0513 23:22:30.796860    6944 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
I0513 23:22:30.797412    6944 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
I0513 23:22:30.798474    6944 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-586300-m02 san=[127.0.0.1 172.23.108.85 ha-586300-m02 localhost minikube]
I0513 23:22:31.162932    6944 provision.go:177] copyRemoteCerts
I0513 23:22:31.170929    6944 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0513 23:22:31.170929    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
I0513 23:22:33.098758    6944 main.go:141] libmachine: [stdout =====>] : Running

I0513 23:22:33.098758    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:33.099869    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
I0513 23:22:35.394202    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85

I0513 23:22:35.394202    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:35.394202    6944 sshutil.go:53] new ssh client: &{IP:172.23.108.85 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\id_rsa Username:docker}
I0513 23:22:35.499367    6944 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3282288s)
I0513 23:22:35.499367    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
I0513 23:22:35.499367    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0513 23:22:35.544188    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
I0513 23:22:35.544188    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
I0513 23:22:35.586773    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
I0513 23:22:35.586773    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0513 23:22:35.629894    6944 provision.go:87] duration metric: took 13.4200766s to configureAuth
I0513 23:22:35.629894    6944 buildroot.go:189] setting minikube options for container-runtime
I0513 23:22:35.630503    6944 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 23:22:35.630503    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
I0513 23:22:37.569285    6944 main.go:141] libmachine: [stdout =====>] : Running

I0513 23:22:37.569964    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:37.570049    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
I0513 23:22:39.860841    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85

I0513 23:22:39.861047    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:39.865045    6944 main.go:141] libmachine: Using SSH client type: native
I0513 23:22:39.865045    6944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.85 22 <nil> <nil>}
I0513 23:22:39.865045    6944 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0513 23:22:39.995769    6944 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I0513 23:22:39.995769    6944 buildroot.go:70] root file system type: tmpfs
I0513 23:22:39.995769    6944 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0513 23:22:39.995769    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
I0513 23:22:41.928614    6944 main.go:141] libmachine: [stdout =====>] : Running

I0513 23:22:41.928614    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:41.928614    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
I0513 23:22:44.213772    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85

I0513 23:22:44.213977    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:44.218709    6944 main.go:141] libmachine: Using SSH client type: native
I0513 23:22:44.219176    6944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.85 22 <nil> <nil>}
I0513 23:22:44.219320    6944 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0513 23:22:44.375539    6944 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0513 23:22:44.375688    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
I0513 23:22:46.291242    6944 main.go:141] libmachine: [stdout =====>] : Running

I0513 23:22:46.291242    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:46.291242    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
I0513 23:22:48.591482    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85

I0513 23:22:48.591482    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:48.596164    6944 main.go:141] libmachine: Using SSH client type: native
I0513 23:22:48.596560    6944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.85 22 <nil> <nil>}
I0513 23:22:48.596560    6944 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0513 23:22:50.983510    6944 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

I0513 23:22:50.983510    6944 machine.go:97] duration metric: took 41.8992695s to provisionDockerMachine
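Editor's note: the `diff ... || { mv ...; systemctl ...; }` one-liner above is an install-if-changed guard — the freshly rendered unit replaces the installed one (and docker is reloaded/restarted) only when the two files differ, which is why the log shows `diff: can't stat ...` on a first install. A root-free sketch of the same pattern, with a temp directory standing in for `/lib/systemd/system` (directory and unit contents here are illustrative):

```shell
# Install-if-changed: promote docker.service.new only when it differs
# from (or is missing as) the installed copy, mirroring the log above.
unitdir=$(mktemp -d)
new="$unitdir/docker.service.new"
cur="$unitdir/docker.service"
printf '[Unit]\nDescription=demo\n' > "$new"

# diff exits non-zero when the files differ or the target is missing.
if ! diff -u "$cur" "$new" >/dev/null 2>&1; then
    mv "$new" "$cur"
    # minikube follows this with: systemctl daemon-reload / restart docker
    echo "unit updated"
else
    echo "unit unchanged"
fi
```

Re-running the snippet prints "unit unchanged", so the daemon is only restarted when the configuration actually changed.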
I0513 23:22:50.983510    6944 start.go:293] postStartSetup for "ha-586300-m02" (driver="hyperv")
I0513 23:22:50.983510    6944 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0513 23:22:50.992795    6944 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0513 23:22:50.992795    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
I0513 23:22:52.900212    6944 main.go:141] libmachine: [stdout =====>] : Running

I0513 23:22:52.900212    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:52.900212    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
I0513 23:22:55.189460    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85

I0513 23:22:55.189817    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:55.190260    6944 sshutil.go:53] new ssh client: &{IP:172.23.108.85 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\id_rsa Username:docker}
I0513 23:22:55.299965    6944 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3069598s)
I0513 23:22:55.308215    6944 ssh_runner.go:195] Run: cat /etc/os-release
I0513 23:22:55.314529    6944 info.go:137] Remote host: Buildroot 2023.02.9
I0513 23:22:55.314529    6944 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
I0513 23:22:55.315348    6944 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
I0513 23:22:55.316137    6944 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> 59842.pem in /etc/ssl/certs
I0513 23:22:55.316137    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /etc/ssl/certs/59842.pem
I0513 23:22:55.332411    6944 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0513 23:22:55.354327    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /etc/ssl/certs/59842.pem (1708 bytes)
I0513 23:22:55.400417    6944 start.go:296] duration metric: took 4.4166904s for postStartSetup
I0513 23:22:55.408422    6944 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0513 23:22:55.408422    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
I0513 23:22:57.361431    6944 main.go:141] libmachine: [stdout =====>] : Running

I0513 23:22:57.362067    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:57.362144    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
I0513 23:22:59.655081    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85

I0513 23:22:59.655081    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:22:59.655081    6944 sshutil.go:53] new ssh client: &{IP:172.23.108.85 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\id_rsa Username:docker}
I0513 23:22:59.765198    6944 ssh_runner.go:235] Completed: sudo ls --almost-all -1 /var/lib/minikube/backup: (4.3565625s)
I0513 23:22:59.765289    6944 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
I0513 23:22:59.774041    6944 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0513 23:22:59.846155    6944 fix.go:56] duration metric: took 1m25.2236074s for fixHost
I0513 23:22:59.846234    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
I0513 23:23:01.780207    6944 main.go:141] libmachine: [stdout =====>] : Running

I0513 23:23:01.780928    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:23:01.781006    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
I0513 23:23:04.084412    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85

I0513 23:23:04.084487    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:23:04.088516    6944 main.go:141] libmachine: Using SSH client type: native
I0513 23:23:04.088635    6944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.85 22 <nil> <nil>}
I0513 23:23:04.088635    6944 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0513 23:23:04.222779    6944 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715642584.424244294

I0513 23:23:04.222927    6944 fix.go:216] guest clock: 1715642584.424244294
I0513 23:23:04.222927    6944 fix.go:229] Guest: 2024-05-13 23:23:04.424244294 +0000 UTC Remote: 2024-05-13 23:22:59.8461551 +0000 UTC m=+87.324792101 (delta=4.578089194s)
I0513 23:23:04.223074    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
I0513 23:23:06.141950    6944 main.go:141] libmachine: [stdout =====>] : Running

I0513 23:23:06.141950    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:23:06.142029    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
I0513 23:23:08.438265    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85

I0513 23:23:08.438265    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:23:08.443302    6944 main.go:141] libmachine: Using SSH client type: native
I0513 23:23:08.443677    6944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.85 22 <nil> <nil>}
I0513 23:23:08.443740    6944 main.go:141] libmachine: About to run SSH command:
sudo date -s @1715642584
I0513 23:23:08.586595    6944 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 13 23:23:04 UTC 2024

I0513 23:23:08.586595    6944 fix.go:236] clock set: Mon May 13 23:23:04 UTC 2024
(err=<nil>)
I0513 23:23:08.586595    6944 start.go:83] releasing machines lock for "ha-586300-m02", held for 1m33.9638725s
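Editor's note: the guest-clock fix above reads the VM's epoch time over SSH (`date +%s.%N`), compares it with the host-side timestamp to get the reported `delta=4.578089194s`, then pins the guest clock with `sudo date -s @<epoch>`. The drift computation can be sketched in shell; the epoch values below are made up for illustration, not taken from the log.

```shell
# Clock-skew check as in fix.go above: compare two epoch readings
# taken at (roughly) the same moment and compute the drift.
guest_epoch=1715642584   # e.g. from: ssh <vm> 'date +%s'  (hypothetical)
host_epoch=1715642579    # host clock sampled at the same moment (hypothetical)
delta=$((guest_epoch - host_epoch))
echo "guest is ${delta}s ahead of host"
# When the drift matters, the provisioner pins the guest clock, as in
# the log above:  sudo date -s @<epoch>
```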
I0513 23:23:08.586595    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
I0513 23:23:10.546470    6944 main.go:141] libmachine: [stdout =====>] : Running

I0513 23:23:10.546665    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:23:10.546736    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
I0513 23:23:12.858175    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85

I0513 23:23:12.858175    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:23:12.861354    6944 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0513 23:23:12.861521    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
I0513 23:23:12.868785    6944 ssh_runner.go:195] Run: systemctl --version
I0513 23:23:12.868785    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
I0513 23:23:14.859029    6944 main.go:141] libmachine: [stdout =====>] : Running

I0513 23:23:14.859029    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:23:14.859029    6944 main.go:141] libmachine: [stdout =====>] : Running

I0513 23:23:14.859029    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:23:14.859029    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
I0513 23:23:14.859029    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
I0513 23:23:17.221423    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85

I0513 23:23:17.221498    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:23:17.221777    6944 sshutil.go:53] new ssh client: &{IP:172.23.108.85 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\id_rsa Username:docker}
I0513 23:23:17.243799    6944 main.go:141] libmachine: [stdout =====>] : 172.23.108.85

I0513 23:23:17.244227    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:23:17.244607    6944 sshutil.go:53] new ssh client: &{IP:172.23.108.85 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\id_rsa Username:docker}
I0513 23:23:17.504877    6944 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6432936s)
I0513 23:23:17.504877    6944 ssh_runner.go:235] Completed: systemctl --version: (4.6358638s)
I0513 23:23:17.514250    6944 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0513 23:23:17.523632    6944 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0513 23:23:17.531615    6944 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0513 23:23:17.561152    6944 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0513 23:23:17.561268    6944 start.go:494] detecting cgroup driver to use...
I0513 23:23:17.561448    6944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0513 23:23:17.606870    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0513 23:23:17.635064    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0513 23:23:17.656915    6944 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0513 23:23:17.665084    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0513 23:23:17.691845    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0513 23:23:17.720365    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0513 23:23:17.746369    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0513 23:23:17.772352    6944 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0513 23:23:17.804234    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0513 23:23:17.829852    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0513 23:23:17.856646    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0513 23:23:17.887135    6944 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0513 23:23:17.914355    6944 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0513 23:23:17.939359    6944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0513 23:23:18.125592    6944 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0513 23:23:18.154645    6944 start.go:494] detecting cgroup driver to use...
I0513 23:23:18.163449    6944 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0513 23:23:18.197871    6944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0513 23:23:18.227301    6944 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0513 23:23:18.262278    6944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0513 23:23:18.293704    6944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0513 23:23:18.323116    6944 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0513 23:23:18.380426    6944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0513 23:23:18.404387    6944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0513 23:23:18.446916    6944 ssh_runner.go:195] Run: which cri-dockerd
I0513 23:23:18.460361    6944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0513 23:23:18.478079    6944 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0513 23:23:18.520509    6944 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0513 23:23:18.732275    6944 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0513 23:23:18.902992    6944 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0513 23:23:18.902992    6944 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0513 23:23:18.940864    6944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0513 23:23:19.140925    6944 ssh_runner.go:195] Run: sudo systemctl restart docker
I0513 23:23:21.768578    6944 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6274277s)
I0513 23:23:21.777553    6944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0513 23:23:21.809596    6944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0513 23:23:21.840679    6944 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0513 23:23:22.038963    6944 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0513 23:23:22.240558    6944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0513 23:23:22.439980    6944 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0513 23:23:22.476643    6944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0513 23:23:22.506775    6944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0513 23:23:22.696010    6944 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0513 23:23:22.812010    6944 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0513 23:23:22.819249    6944 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0513 23:23:22.827990    6944 start.go:562] Will wait 60s for crictl version
I0513 23:23:22.836461    6944 ssh_runner.go:195] Run: which crictl
I0513 23:23:22.852082    6944 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0513 23:23:22.902751    6944 start.go:578] Version:  0.1.0
RuntimeName:  docker
RuntimeVersion:  26.0.2
RuntimeApiVersion:  v1
I0513 23:23:22.909552    6944 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0513 23:23:22.946475    6944 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0513 23:23:22.977007    6944 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
I0513 23:23:22.977007    6944 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
I0513 23:23:22.985995    6944 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
I0513 23:23:22.985995    6944 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
I0513 23:23:22.985995    6944 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
I0513 23:23:22.985995    6944 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:95:ed Flags:up|broadcast|multicast|running}
I0513 23:23:22.987995    6944 ip.go:210] interface addr: fe80::3ceb:68d:afab:af25/64
I0513 23:23:22.987995    6944 ip.go:210] interface addr: 172.23.96.1/20
I0513 23:23:22.996008    6944 ssh_runner.go:195] Run: grep 172.23.96.1	host.minikube.internal$ /etc/hosts
I0513 23:23:23.002885    6944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0513 23:23:23.023928    6944 mustload.go:65] Loading cluster: ha-586300
I0513 23:23:23.023993    6944 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 23:23:23.025095    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
I0513 23:23:24.937084    6944 main.go:141] libmachine: [stdout =====>] : Running

I0513 23:23:24.937711    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:23:24.937711    6944 host.go:66] Checking if "ha-586300" exists ...
I0513 23:23:24.938316    6944 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300 for IP: 172.23.108.85
I0513 23:23:24.938316    6944 certs.go:194] generating shared ca certs ...
I0513 23:23:24.938316    6944 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0513 23:23:24.938920    6944 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
I0513 23:23:24.938920    6944 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
I0513 23:23:24.938920    6944 certs.go:256] generating profile certs ...
I0513 23:23:24.939778    6944 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\client.key
I0513 23:23:24.939778    6944 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.14ea6552
I0513 23:23:24.939778    6944 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.14ea6552 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.102.229 172.23.108.85 172.23.109.129 172.23.111.254]
I0513 23:23:25.214862    6944 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.14ea6552 ...
I0513 23:23:25.214862    6944 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.14ea6552: {Name:mkb49e1b83900303147cc360608b1f509872b4f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0513 23:23:25.217001    6944 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.14ea6552 ...
I0513 23:23:25.217001    6944 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.14ea6552: {Name:mkb8cf1af3adff27690356140cb79720b0990750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0513 23:23:25.217903    6944 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.14ea6552 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt
I0513 23:23:25.232395    6944 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.14ea6552 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key
I0513 23:23:25.233145    6944 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key
I0513 23:23:25.233145    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
I0513 23:23:25.233145    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
I0513 23:23:25.233145    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0513 23:23:25.233145    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0513 23:23:25.233721    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0513 23:23:25.234533    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0513 23:23:25.234672    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0513 23:23:25.234672    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0513 23:23:25.235365    6944 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem (1338 bytes)
W0513 23:23:25.235365    6944 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984_empty.pem, impossibly tiny 0 bytes
I0513 23:23:25.235365    6944 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
I0513 23:23:25.235365    6944 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
I0513 23:23:25.236884    6944 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
I0513 23:23:25.237366    6944 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
I0513 23:23:25.237366    6944 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem (1708 bytes)
I0513 23:23:25.238134    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0513 23:23:25.238134    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem -> /usr/share/ca-certificates/5984.pem
I0513 23:23:25.238134    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /usr/share/ca-certificates/59842.pem
I0513 23:23:25.238134    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
I0513 23:23:27.179411    6944 main.go:141] libmachine: [stdout =====>] : Running

I0513 23:23:27.179411    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:23:27.179411    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
I0513 23:23:29.502506    6944 main.go:141] libmachine: [stdout =====>] : 172.23.102.229

I0513 23:23:29.502506    6944 main.go:141] libmachine: [stderr =====>] : 
I0513 23:23:29.502506    6944 sshutil.go:53] new ssh client: &{IP:172.23.102.229 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\id_rsa Username:docker}
I0513 23:23:29.606338    6944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
I0513 23:23:29.615719    6944 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
I0513 23:23:29.654086    6944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
I0513 23:23:29.663013    6944 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
I0513 23:23:29.693404    6944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
I0513 23:23:29.701393    6944 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
I0513 23:23:29.729540    6944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
I0513 23:23:29.736478    6944 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
I0513 23:23:29.763502    6944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
I0513 23:23:29.770315    6944 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
I0513 23:23:29.797757    6944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
I0513 23:23:29.804236    6944 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
I0513 23:23:29.822594    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0513 23:23:29.870038    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0513 23:23:29.914735    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0513 23:23:29.961048    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0513 23:23:30.015383    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
I0513 23:23:30.061556    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0513 23:23:30.106124    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0513 23:23:30.151141    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0513 23:23:30.194086    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0513 23:23:30.245376    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem --> /usr/share/ca-certificates/5984.pem (1338 bytes)
I0513 23:23:30.287091    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /usr/share/ca-certificates/59842.pem (1708 bytes)
I0513 23:23:30.331908    6944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
I0513 23:23:30.362389    6944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
I0513 23:23:30.394004    6944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
I0513 23:23:30.426290    6944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
I0513 23:23:30.456942    6944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
I0513 23:23:30.487118    6944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
I0513 23:23:30.520944    6944 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
I0513 23:23:30.564149    6944 ssh_runner.go:195] Run: openssl version
I0513 23:23:30.581526    6944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59842.pem && ln -fs /usr/share/ca-certificates/59842.pem /etc/ssl/certs/59842.pem"
I0513 23:23:30.613405    6944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59842.pem
I0513 23:23:30.619453    6944 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
I0513 23:23:30.628184    6944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59842.pem
I0513 23:23:30.645733    6944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59842.pem /etc/ssl/certs/3ec20f2e.0"
I0513 23:23:30.677254    6944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0513 23:23:30.708279    6944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0513 23:23:30.718162    6944 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
I0513 23:23:30.726931    6944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0513 23:23:30.744725    6944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0513 23:23:30.772167    6944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5984.pem && ln -fs /usr/share/ca-certificates/5984.pem /etc/ssl/certs/5984.pem"
I0513 23:23:30.800631    6944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5984.pem
I0513 23:23:30.806990    6944 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
I0513 23:23:30.816157    6944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5984.pem
I0513 23:23:30.831934    6944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5984.pem /etc/ssl/certs/51391683.0"
I0513 23:23:30.857775    6944 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0513 23:23:30.876211    6944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0513 23:23:30.893532    6944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0513 23:23:30.910303    6944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0513 23:23:30.927180    6944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0513 23:23:30.945313    6944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0513 23:23:30.962918    6944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0513 23:23:30.971176    6944 kubeadm.go:928] updating node {m02 172.23.108.85 8443 v1.30.0 docker true true} ...
I0513 23:23:30.971176    6944 kubeadm.go:940] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-586300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.108.85

[Install]
config:
{KubernetesVersion:v1.30.0 ClusterName:ha-586300 Namespace:default APIServerHAVIP:172.23.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0513 23:23:30.971176    6944 kube-vip.go:115] generating kube-vip config ...
I0513 23:23:30.983192    6944 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
I0513 23:23:31.008593    6944 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
I0513 23:23:31.008853    6944 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args:
    - manager
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "8443"
    - name: vip_nodename
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: vip_interface
      value: eth0
    - name: vip_cidr
      value: "32"
    - name: dns_mode
      value: first
    - name: cp_enable
      value: "true"
    - name: cp_namespace
      value: kube-system
    - name: vip_leaderelection
      value: "true"
    - name: vip_leasename
      value: plndr-cp-lock
    - name: vip_leaseduration
      value: "5"
    - name: vip_renewdeadline
      value: "3"
    - name: vip_retryperiod
      value: "1"
    - name: address
      value: 172.23.111.254
    - name: prometheus_server
      value: :2112
    - name: lb_enable
      value: "true"
    - name: lb_port
      value: "8443"
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    imagePullPolicy: IfNotPresent
    name: kube-vip
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
    volumeMounts:
    - mountPath: /etc/kubernetes/admin.conf
      name: kubeconfig
  hostAliases:
  - hostnames:
    - kubernetes
    ip: 127.0.0.1
  hostNetwork: true
  volumes:
  - hostPath:
      path: "/etc/kubernetes/admin.conf"
    name: kubeconfig
status: {}
I0513 23:23:31.019720    6944 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
I0513 23:23:31.040545    6944 binaries.go:44] Found k8s binaries, skipping transfer
I0513 23:23:31.048508    6944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
I0513 23:23:31.066078    6944 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I0513 23:23:31.096784    6944 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0513 23:23:31.125110    6944 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
I0513 23:23:31.164527    6944 ssh_runner.go:195] Run: grep 172.23.111.254	control-plane.minikube.internal$ /etc/hosts
I0513 23:23:31.169973    6944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.111.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0513 23:23:31.199101    6944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0513 23:23:31.402186    6944 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0513 23:23:31.438001    6944 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.23.108.85 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0513 23:23:31.438119    6944 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
I0513 23:23:31.441957    6944 out.go:177] * Verifying Kubernetes components...
I0513 23:23:31.444153    6944 out.go:177] * Enabled addons: 
I0513 23:23:31.438307    6944 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 23:23:31.445527    6944 addons.go:505] duration metric: took 7.407ms for enable addons: enabled=[]
I0513 23:23:31.460612    6944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0513 23:23:31.685740    6944 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0513 23:23:31.714086    6944 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
I0513 23:23:31.714712    6944 kapi.go:59] client config for ha-586300: &rest.Config{Host:"https://172.23.111.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-586300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-586300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
W0513 23:23:31.714852    6944 kubeadm.go:477] Overriding stale ClientConfig host https://172.23.111.254:8443 with https://172.23.102.229:8443
I0513 23:23:31.715749    6944 cert_rotation.go:137] Starting client certificate rotation controller
I0513 23:23:31.715802    6944 node_ready.go:35] waiting up to 6m0s for node "ha-586300-m02" to be "Ready" ...
I0513 23:23:31.715802    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:31.715802    6944 round_trippers.go:469] Request Headers:
I0513 23:23:31.715802    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:31.715802    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:31.730952    6944 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
I0513 23:23:32.230267    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:32.230396    6944 round_trippers.go:469] Request Headers:
I0513 23:23:32.230396    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:32.230396    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:32.234752    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:32.235918    6944 node_ready.go:49] node "ha-586300-m02" has status "Ready":"True"
I0513 23:23:32.235918    6944 node_ready.go:38] duration metric: took 520.09ms for node "ha-586300-m02" to be "Ready" ...
I0513 23:23:32.235918    6944 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0513 23:23:32.235918    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods
I0513 23:23:32.235918    6944 round_trippers.go:469] Request Headers:
I0513 23:23:32.235918    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:32.235918    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:32.247320    6944 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
I0513 23:23:32.260001    6944 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4qbhd" in "kube-system" namespace to be "Ready" ...
I0513 23:23:32.260001    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4qbhd
I0513 23:23:32.260001    6944 round_trippers.go:469] Request Headers:
I0513 23:23:32.260001    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:32.260001    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:32.263689    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:23:32.264914    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
I0513 23:23:32.264914    6944 round_trippers.go:469] Request Headers:
I0513 23:23:32.264914    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:32.264914    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:32.269172    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:32.269522    6944 pod_ready.go:92] pod "coredns-7db6d8ff4d-4qbhd" in "kube-system" namespace has status "Ready":"True"
I0513 23:23:32.269522    6944 pod_ready.go:81] duration metric: took 9.5205ms for pod "coredns-7db6d8ff4d-4qbhd" in "kube-system" namespace to be "Ready" ...
I0513 23:23:32.269522    6944 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wj8z7" in "kube-system" namespace to be "Ready" ...
I0513 23:23:32.269522    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wj8z7
I0513 23:23:32.270056    6944 round_trippers.go:469] Request Headers:
I0513 23:23:32.270056    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:32.270144    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:32.273467    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:23:32.274789    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
I0513 23:23:32.274789    6944 round_trippers.go:469] Request Headers:
I0513 23:23:32.274789    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:32.274789    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:32.278261    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:23:32.279067    6944 pod_ready.go:92] pod "coredns-7db6d8ff4d-wj8z7" in "kube-system" namespace has status "Ready":"True"
I0513 23:23:32.279067    6944 pod_ready.go:81] duration metric: took 9.5445ms for pod "coredns-7db6d8ff4d-wj8z7" in "kube-system" namespace to be "Ready" ...
I0513 23:23:32.279067    6944 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-586300" in "kube-system" namespace to be "Ready" ...
I0513 23:23:32.279190    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300
I0513 23:23:32.279190    6944 round_trippers.go:469] Request Headers:
I0513 23:23:32.279190    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:32.279190    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:32.286484    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0513 23:23:32.287470    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
I0513 23:23:32.287470    6944 round_trippers.go:469] Request Headers:
I0513 23:23:32.287470    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:32.287470    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:32.291259    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:23:32.292273    6944 pod_ready.go:92] pod "etcd-ha-586300" in "kube-system" namespace has status "Ready":"True"
I0513 23:23:32.292273    6944 pod_ready.go:81] duration metric: took 13.1399ms for pod "etcd-ha-586300" in "kube-system" namespace to be "Ready" ...
I0513 23:23:32.292273    6944 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
I0513 23:23:32.292273    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:32.292273    6944 round_trippers.go:469] Request Headers:
I0513 23:23:32.292273    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:32.292273    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:32.296712    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:32.296712    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:32.297713    6944 round_trippers.go:469] Request Headers:
I0513 23:23:32.297713    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:32.297713    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:32.301913    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:32.796266    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:32.796319    6944 round_trippers.go:469] Request Headers:
I0513 23:23:32.796319    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:32.796319    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:32.800507    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:32.802152    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:32.802152    6944 round_trippers.go:469] Request Headers:
I0513 23:23:32.802152    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:32.802246    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:32.817113    6944 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
I0513 23:23:33.303381    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:33.303381    6944 round_trippers.go:469] Request Headers:
I0513 23:23:33.303465    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:33.303465    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:33.311934    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0513 23:23:33.313094    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:33.313094    6944 round_trippers.go:469] Request Headers:
I0513 23:23:33.313094    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:33.313637    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:33.319328    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:23:33.795738    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:33.795738    6944 round_trippers.go:469] Request Headers:
I0513 23:23:33.795738    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:33.795738    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:33.800744    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:23:33.803675    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:33.803768    6944 round_trippers.go:469] Request Headers:
I0513 23:23:33.803830    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:33.803830    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:33.808743    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:34.303622    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:34.303622    6944 round_trippers.go:469] Request Headers:
I0513 23:23:34.303622    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:34.303622    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:34.319837    6944 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
I0513 23:23:34.320737    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:34.321271    6944 round_trippers.go:469] Request Headers:
I0513 23:23:34.321271    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:34.321271    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:34.326105    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:34.327096    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:23:34.795220    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:34.795220    6944 round_trippers.go:469] Request Headers:
I0513 23:23:34.795220    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:34.795220    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:34.799302    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:34.800729    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:34.800729    6944 round_trippers.go:469] Request Headers:
I0513 23:23:34.800729    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:34.800785    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:34.803972    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:23:35.303475    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:35.303535    6944 round_trippers.go:469] Request Headers:
I0513 23:23:35.303664    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:35.303749    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:35.316928    6944 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
I0513 23:23:35.318421    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:35.318482    6944 round_trippers.go:469] Request Headers:
I0513 23:23:35.318482    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:35.318482    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:35.341166    6944 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
I0513 23:23:35.796752    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:35.796752    6944 round_trippers.go:469] Request Headers:
I0513 23:23:35.796752    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:35.796752    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:35.840717    6944 round_trippers.go:574] Response Status: 200 OK in 43 milliseconds
I0513 23:23:35.841695    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:35.841695    6944 round_trippers.go:469] Request Headers:
I0513 23:23:35.841695    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:35.841695    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:35.847299    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:23:36.305318    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:36.305318    6944 round_trippers.go:469] Request Headers:
I0513 23:23:36.305415    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:36.305415    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:36.313751    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0513 23:23:36.314995    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:36.314995    6944 round_trippers.go:469] Request Headers:
I0513 23:23:36.314995    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:36.314995    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:36.320177    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:23:36.805415    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:36.805415    6944 round_trippers.go:469] Request Headers:
I0513 23:23:36.805415    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:36.805415    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:36.809813    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:36.811110    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:36.811175    6944 round_trippers.go:469] Request Headers:
I0513 23:23:36.811175    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:36.811175    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:36.815374    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:36.815374    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:23:37.293065    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:37.293065    6944 round_trippers.go:469] Request Headers:
I0513 23:23:37.293163    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:37.293163    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:37.298340    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:23:37.299774    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:37.299774    6944 round_trippers.go:469] Request Headers:
I0513 23:23:37.299774    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:37.299774    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:37.305382    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:23:37.807049    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:37.807049    6944 round_trippers.go:469] Request Headers:
I0513 23:23:37.807274    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:37.807401    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:37.812681    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:23:37.815005    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:37.815070    6944 round_trippers.go:469] Request Headers:
I0513 23:23:37.815070    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:37.815070    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:37.819241    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:38.293206    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:38.293206    6944 round_trippers.go:469] Request Headers:
I0513 23:23:38.293206    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:38.293206    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:38.297787    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:38.298953    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:38.298953    6944 round_trippers.go:469] Request Headers:
I0513 23:23:38.298953    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:38.298953    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:38.304240    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:23:38.793580    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:38.793716    6944 round_trippers.go:469] Request Headers:
I0513 23:23:38.793783    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:38.793783    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:38.799546    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:23:38.800379    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:38.800467    6944 round_trippers.go:469] Request Headers:
I0513 23:23:38.800467    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:38.800467    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:38.804149    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:23:39.297375    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:39.297762    6944 round_trippers.go:469] Request Headers:
I0513 23:23:39.297762    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:39.297762    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:39.302091    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:39.303981    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:39.303981    6944 round_trippers.go:469] Request Headers:
I0513 23:23:39.303981    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:39.303981    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:39.309015    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:23:39.309015    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:23:39.798492    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:39.798638    6944 round_trippers.go:469] Request Headers:
I0513 23:23:39.798638    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:39.798638    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:39.802292    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:23:39.804144    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:39.804232    6944 round_trippers.go:469] Request Headers:
I0513 23:23:39.804232    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:39.804232    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:39.808475    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:40.298209    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:40.298209    6944 round_trippers.go:469] Request Headers:
I0513 23:23:40.298209    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:40.298209    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:40.304958    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0513 23:23:40.306073    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:40.306130    6944 round_trippers.go:469] Request Headers:
I0513 23:23:40.306130    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:40.306130    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:40.315684    6944 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
I0513 23:23:40.797839    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:40.797839    6944 round_trippers.go:469] Request Headers:
I0513 23:23:40.798011    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:40.798011    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:40.804653    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0513 23:23:40.806653    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:40.806653    6944 round_trippers.go:469] Request Headers:
I0513 23:23:40.806653    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:40.806653    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:40.810206    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:23:41.294040    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:41.294040    6944 round_trippers.go:469] Request Headers:
I0513 23:23:41.294040    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:41.294040    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:41.299082    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:41.300436    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:41.300436    6944 round_trippers.go:469] Request Headers:
I0513 23:23:41.300436    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:41.300436    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:41.305080    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:41.793733    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:41.793839    6944 round_trippers.go:469] Request Headers:
I0513 23:23:41.793839    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:41.793839    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:41.797688    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:23:41.799983    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:41.800084    6944 round_trippers.go:469] Request Headers:
I0513 23:23:41.800084    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:41.800084    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:41.806552    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0513 23:23:41.807571    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:23:42.294141    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:42.294350    6944 round_trippers.go:469] Request Headers:
I0513 23:23:42.294350    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:42.294350    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:42.307262    6944 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
I0513 23:23:42.310323    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:42.310323    6944 round_trippers.go:469] Request Headers:
I0513 23:23:42.310323    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:42.310323    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:42.315267    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:42.794043    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:42.794447    6944 round_trippers.go:469] Request Headers:
I0513 23:23:42.794447    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:42.794447    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:42.799705    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:23:42.801103    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:42.801103    6944 round_trippers.go:469] Request Headers:
I0513 23:23:42.801103    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:42.801103    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:42.805673    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:43.297009    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:43.297009    6944 round_trippers.go:469] Request Headers:
I0513 23:23:43.297009    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:43.297009    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:43.301664    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:43.303395    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:43.303465    6944 round_trippers.go:469] Request Headers:
I0513 23:23:43.303465    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:43.303465    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:43.307532    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:43.798658    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:43.798731    6944 round_trippers.go:469] Request Headers:
I0513 23:23:43.798731    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:43.798731    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:43.803138    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:43.804830    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:43.804902    6944 round_trippers.go:469] Request Headers:
I0513 23:23:43.804902    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:43.804902    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:43.810085    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:23:43.810912    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:23:44.297762    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:44.297818    6944 round_trippers.go:469] Request Headers:
I0513 23:23:44.297841    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:44.297841    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:44.312872    6944 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
I0513 23:23:44.314199    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:44.314199    6944 round_trippers.go:469] Request Headers:
I0513 23:23:44.314199    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:44.314199    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:44.322806    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0513 23:23:44.798469    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:44.798582    6944 round_trippers.go:469] Request Headers:
I0513 23:23:44.798582    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:44.798582    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:44.802632    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:44.804255    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:44.804255    6944 round_trippers.go:469] Request Headers:
I0513 23:23:44.804255    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:44.804255    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:44.809181    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:45.302639    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:45.302922    6944 round_trippers.go:469] Request Headers:
I0513 23:23:45.302922    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:45.302922    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:45.310376    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0513 23:23:45.311644    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:45.311644    6944 round_trippers.go:469] Request Headers:
I0513 23:23:45.311644    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:45.311644    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:45.316477    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:45.805349    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:45.805349    6944 round_trippers.go:469] Request Headers:
I0513 23:23:45.805349    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:45.805349    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:45.811314    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:23:45.812739    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:45.812801    6944 round_trippers.go:469] Request Headers:
I0513 23:23:45.812801    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:45.812801    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:45.818012    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:23:45.819206    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:23:46.305362    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:46.305362    6944 round_trippers.go:469] Request Headers:
I0513 23:23:46.305362    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:46.305362    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:46.308894    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:23:46.310941    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:46.311001    6944 round_trippers.go:469] Request Headers:
I0513 23:23:46.311059    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:46.311059    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:46.315821    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:23:46.805934    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:46.805934    6944 round_trippers.go:469] Request Headers:
I0513 23:23:46.805934    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:46.805934    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:46.810599    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:46.811823    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:46.811823    6944 round_trippers.go:469] Request Headers:
I0513 23:23:46.811886    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:46.811886    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:46.815711    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:23:47.308664    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:47.308758    6944 round_trippers.go:469] Request Headers:
I0513 23:23:47.308758    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:47.308758    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:47.316735    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0513 23:23:47.317666    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:47.317666    6944 round_trippers.go:469] Request Headers:
I0513 23:23:47.317767    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:47.317767    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:47.321991    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:47.794332    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:47.794332    6944 round_trippers.go:469] Request Headers:
I0513 23:23:47.794332    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:47.794332    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:47.800386    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:23:47.801179    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:47.801179    6944 round_trippers.go:469] Request Headers:
I0513 23:23:47.801179    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:47.801179    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:47.805251    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:48.294687    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:48.294687    6944 round_trippers.go:469] Request Headers:
I0513 23:23:48.294687    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:48.294687    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:48.299712    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:23:48.301026    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:48.301182    6944 round_trippers.go:469] Request Headers:
I0513 23:23:48.301182    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:48.301182    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:48.306357    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:23:48.307228    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:23:48.795569    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:48.795569    6944 round_trippers.go:469] Request Headers:
I0513 23:23:48.795569    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:48.795569    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:48.800742    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:48.801458    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:48.801458    6944 round_trippers.go:469] Request Headers:
I0513 23:23:48.801458    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:48.801458    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:48.806016    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:49.297278    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:49.297362    6944 round_trippers.go:469] Request Headers:
I0513 23:23:49.297362    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:49.297362    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:49.301673    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:49.303203    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:49.303203    6944 round_trippers.go:469] Request Headers:
I0513 23:23:49.303203    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:49.303203    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:49.307373    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:49.796100    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:49.796212    6944 round_trippers.go:469] Request Headers:
I0513 23:23:49.796212    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:49.796212    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:49.801611    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:23:49.802328    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:49.802328    6944 round_trippers.go:469] Request Headers:
I0513 23:23:49.802328    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:49.802436    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:49.806479    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:50.296357    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:50.296357    6944 round_trippers.go:469] Request Headers:
I0513 23:23:50.296357    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:50.296357    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:50.303926    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0513 23:23:50.305429    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:50.305489    6944 round_trippers.go:469] Request Headers:
I0513 23:23:50.305489    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:50.305489    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:50.313005    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0513 23:23:50.313358    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:23:50.798938    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:50.798938    6944 round_trippers.go:469] Request Headers:
I0513 23:23:50.798938    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:50.798938    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:50.806350    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0513 23:23:50.807298    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:50.807298    6944 round_trippers.go:469] Request Headers:
I0513 23:23:50.807298    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:50.807298    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:50.810608    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:23:51.301060    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:51.301060    6944 round_trippers.go:469] Request Headers:
I0513 23:23:51.301060    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:51.301217    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:51.305669    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:51.306801    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:51.306801    6944 round_trippers.go:469] Request Headers:
I0513 23:23:51.306801    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:51.306858    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:51.311432    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:51.802743    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:51.803028    6944 round_trippers.go:469] Request Headers:
I0513 23:23:51.803142    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:51.803142    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:51.809772    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0513 23:23:51.811049    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:51.811049    6944 round_trippers.go:469] Request Headers:
I0513 23:23:51.811049    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:51.811049    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:51.814868    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:23:52.301210    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:52.301210    6944 round_trippers.go:469] Request Headers:
I0513 23:23:52.301210    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:52.301210    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:52.310317    6944 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
I0513 23:23:52.312092    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:52.312162    6944 round_trippers.go:469] Request Headers:
I0513 23:23:52.312162    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:52.312162    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:52.316006    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:23:52.316765    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:23:52.804067    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:52.804607    6944 round_trippers.go:469] Request Headers:
I0513 23:23:52.804607    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:52.804607    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:52.810593    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:23:52.811614    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:52.811614    6944 round_trippers.go:469] Request Headers:
I0513 23:23:52.811614    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:52.811614    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:52.817192    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:23:53.302478    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:53.302478    6944 round_trippers.go:469] Request Headers:
I0513 23:23:53.302478    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:53.302478    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:53.308996    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0513 23:23:53.309729    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:53.309729    6944 round_trippers.go:469] Request Headers:
I0513 23:23:53.309729    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:53.309729    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:53.314513    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:53.803262    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:53.803349    6944 round_trippers.go:469] Request Headers:
I0513 23:23:53.803349    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:53.803349    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:53.807514    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:53.809439    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:53.809540    6944 round_trippers.go:469] Request Headers:
I0513 23:23:53.809540    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:53.809540    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:53.813785    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:54.294136    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:54.294136    6944 round_trippers.go:469] Request Headers:
I0513 23:23:54.294136    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:54.294136    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:54.302306    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0513 23:23:54.303939    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:54.304004    6944 round_trippers.go:469] Request Headers:
I0513 23:23:54.304004    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:54.304004    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:54.313679    6944 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
I0513 23:23:54.796389    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:54.796692    6944 round_trippers.go:469] Request Headers:
I0513 23:23:54.796692    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:54.796692    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:54.801709    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:23:54.802304    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:54.802304    6944 round_trippers.go:469] Request Headers:
I0513 23:23:54.802304    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:54.802304    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:54.805967    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:23:54.807480    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:23:55.298759    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:55.298759    6944 round_trippers.go:469] Request Headers:
I0513 23:23:55.298759    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:55.298759    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:55.303334    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:55.304795    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:55.304795    6944 round_trippers.go:469] Request Headers:
I0513 23:23:55.304795    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:55.304795    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:55.309611    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:55.801499    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:55.801605    6944 round_trippers.go:469] Request Headers:
I0513 23:23:55.801605    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:55.801605    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:55.806227    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:55.808324    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:55.808393    6944 round_trippers.go:469] Request Headers:
I0513 23:23:55.808393    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:55.808393    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:55.812689    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:56.301290    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:56.301657    6944 round_trippers.go:469] Request Headers:
I0513 23:23:56.301657    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:56.301657    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:56.312970    6944 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
I0513 23:23:56.314480    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:56.314540    6944 round_trippers.go:469] Request Headers:
I0513 23:23:56.314540    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:56.314540    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:56.323473    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0513 23:23:56.801008    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:56.801008    6944 round_trippers.go:469] Request Headers:
I0513 23:23:56.801008    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:56.801008    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:56.807334    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0513 23:23:56.808125    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:56.808125    6944 round_trippers.go:469] Request Headers:
I0513 23:23:56.808125    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:56.808125    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:56.813367    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:23:56.814786    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:23:57.299387    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:57.299387    6944 round_trippers.go:469] Request Headers:
I0513 23:23:57.299387    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:57.299387    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:57.304021    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:57.305372    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:57.305372    6944 round_trippers.go:469] Request Headers:
I0513 23:23:57.305439    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:57.305439    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:57.308632    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:23:57.799610    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:57.799610    6944 round_trippers.go:469] Request Headers:
I0513 23:23:57.799610    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:57.799610    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:57.803997    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:57.805132    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:57.805132    6944 round_trippers.go:469] Request Headers:
I0513 23:23:57.805226    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:57.805226    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:57.809400    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:58.299794    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:58.299794    6944 round_trippers.go:469] Request Headers:
I0513 23:23:58.299916    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:58.299916    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:58.307639    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0513 23:23:58.308995    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:58.309068    6944 round_trippers.go:469] Request Headers:
I0513 23:23:58.309068    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:58.309106    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:58.315522    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0513 23:23:58.801403    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:58.801403    6944 round_trippers.go:469] Request Headers:
I0513 23:23:58.801403    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:58.801403    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:58.810456    6944 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
I0513 23:23:58.811261    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:58.811261    6944 round_trippers.go:469] Request Headers:
I0513 23:23:58.811261    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:58.811261    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:58.815945    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:23:58.817216    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:23:59.302029    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:59.302122    6944 round_trippers.go:469] Request Headers:
I0513 23:23:59.302122    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:59.302122    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:59.308317    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0513 23:23:59.309935    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:59.309935    6944 round_trippers.go:469] Request Headers:
I0513 23:23:59.309935    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:59.309935    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:59.313316    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:23:59.802781    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:23:59.803039    6944 round_trippers.go:469] Request Headers:
I0513 23:23:59.803117    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:59.803117    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:59.809251    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0513 23:23:59.810224    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:23:59.810224    6944 round_trippers.go:469] Request Headers:
I0513 23:23:59.810224    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:23:59.810224    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:23:59.814218    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:00.298628    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:00.298628    6944 round_trippers.go:469] Request Headers:
I0513 23:24:00.298628    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:00.298628    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:00.304844    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:00.306000    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:00.306000    6944 round_trippers.go:469] Request Headers:
I0513 23:24:00.306060    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:00.306060    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:00.310475    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:00.798655    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:00.798844    6944 round_trippers.go:469] Request Headers:
I0513 23:24:00.798911    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:00.798911    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:00.806087    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0513 23:24:00.807732    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:00.807732    6944 round_trippers.go:469] Request Headers:
I0513 23:24:00.807793    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:00.807793    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:00.810979    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:01.298407    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:01.298407    6944 round_trippers.go:469] Request Headers:
I0513 23:24:01.298407    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:01.298407    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:01.302239    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:01.303843    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:01.303843    6944 round_trippers.go:469] Request Headers:
I0513 23:24:01.303843    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:01.303946    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:01.309107    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:01.309881    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:24:01.797627    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:01.797627    6944 round_trippers.go:469] Request Headers:
I0513 23:24:01.797627    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:01.797627    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:01.807715    6944 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
I0513 23:24:01.808672    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:01.808672    6944 round_trippers.go:469] Request Headers:
I0513 23:24:01.808672    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:01.808734    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:01.812917    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:02.301403    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:02.301504    6944 round_trippers.go:469] Request Headers:
I0513 23:24:02.301504    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:02.301504    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:02.313029    6944 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
I0513 23:24:02.315043    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:02.315043    6944 round_trippers.go:469] Request Headers:
I0513 23:24:02.315043    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:02.315043    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:02.329886    6944 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
I0513 23:24:02.802058    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:02.802058    6944 round_trippers.go:469] Request Headers:
I0513 23:24:02.802058    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:02.802058    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:02.806616    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:02.808292    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:02.808314    6944 round_trippers.go:469] Request Headers:
I0513 23:24:02.808314    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:02.808314    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:02.813383    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:03.303177    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:03.303177    6944 round_trippers.go:469] Request Headers:
I0513 23:24:03.303177    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:03.303177    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:03.308520    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:03.309978    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:03.310044    6944 round_trippers.go:469] Request Headers:
I0513 23:24:03.310044    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:03.310097    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:03.318154    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0513 23:24:03.319260    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:24:03.806178    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:03.806178    6944 round_trippers.go:469] Request Headers:
I0513 23:24:03.806414    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:03.806414    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:03.810857    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:03.811617    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:03.811617    6944 round_trippers.go:469] Request Headers:
I0513 23:24:03.811684    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:03.811684    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:03.816466    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:04.306492    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:04.306577    6944 round_trippers.go:469] Request Headers:
I0513 23:24:04.306577    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:04.306577    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:04.312032    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:04.314115    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:04.314115    6944 round_trippers.go:469] Request Headers:
I0513 23:24:04.314115    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:04.314115    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:04.319061    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:04.806593    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:04.806665    6944 round_trippers.go:469] Request Headers:
I0513 23:24:04.806738    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:04.806738    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:04.812554    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:04.813825    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:04.813825    6944 round_trippers.go:469] Request Headers:
I0513 23:24:04.813825    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:04.813825    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:04.818351    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:05.305637    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:05.305637    6944 round_trippers.go:469] Request Headers:
I0513 23:24:05.305637    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:05.305637    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:05.311035    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:05.312153    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:05.312255    6944 round_trippers.go:469] Request Headers:
I0513 23:24:05.312255    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:05.312329    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:05.318662    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0513 23:24:05.319648    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:24:05.805013    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:05.805013    6944 round_trippers.go:469] Request Headers:
I0513 23:24:05.805013    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:05.805013    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:05.812802    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0513 23:24:05.814424    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:05.814424    6944 round_trippers.go:469] Request Headers:
I0513 23:24:05.814424    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:05.814424    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:05.818136    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:06.304280    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:06.304349    6944 round_trippers.go:469] Request Headers:
I0513 23:24:06.304349    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:06.304349    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:06.310157    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:06.311798    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:06.311798    6944 round_trippers.go:469] Request Headers:
I0513 23:24:06.311798    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:06.311798    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:06.315673    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:06.803017    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:06.803086    6944 round_trippers.go:469] Request Headers:
I0513 23:24:06.803086    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:06.803086    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:06.807909    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:06.810019    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:06.810107    6944 round_trippers.go:469] Request Headers:
I0513 23:24:06.810107    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:06.810107    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:06.814139    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:07.302122    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:07.302122    6944 round_trippers.go:469] Request Headers:
I0513 23:24:07.302122    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:07.302122    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:07.306957    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:07.307993    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:07.307993    6944 round_trippers.go:469] Request Headers:
I0513 23:24:07.307993    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:07.307993    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:07.311179    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:07.805133    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:07.805133    6944 round_trippers.go:469] Request Headers:
I0513 23:24:07.805133    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:07.805133    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:07.810272    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:07.811149    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:07.811216    6944 round_trippers.go:469] Request Headers:
I0513 23:24:07.811216    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:07.811216    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:07.818370    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0513 23:24:07.818370    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:24:08.304913    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:08.304913    6944 round_trippers.go:469] Request Headers:
I0513 23:24:08.304913    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:08.304913    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:08.311582    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0513 23:24:08.312778    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:08.312844    6944 round_trippers.go:469] Request Headers:
I0513 23:24:08.312844    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:08.312844    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:08.316566    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:08.804201    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:08.804259    6944 round_trippers.go:469] Request Headers:
I0513 23:24:08.804327    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:08.804327    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:08.809138    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:08.810774    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:08.810774    6944 round_trippers.go:469] Request Headers:
I0513 23:24:08.810774    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:08.810774    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:08.816806    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0513 23:24:09.305398    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:09.305398    6944 round_trippers.go:469] Request Headers:
I0513 23:24:09.305499    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:09.305499    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:09.311113    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:09.312432    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:09.312432    6944 round_trippers.go:469] Request Headers:
I0513 23:24:09.312504    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:09.312504    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:09.316904    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:09.809170    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:09.809170    6944 round_trippers.go:469] Request Headers:
I0513 23:24:09.809259    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:09.809259    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:09.813904    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:09.815353    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:09.815410    6944 round_trippers.go:469] Request Headers:
I0513 23:24:09.815410    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:09.815410    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:09.819985    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:09.822553    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:24:10.294659    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:10.294732    6944 round_trippers.go:469] Request Headers:
I0513 23:24:10.294803    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:10.294803    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:10.300801    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:10.302037    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:10.302104    6944 round_trippers.go:469] Request Headers:
I0513 23:24:10.302104    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:10.302172    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:10.305423    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:10.810258    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:10.810334    6944 round_trippers.go:469] Request Headers:
I0513 23:24:10.810334    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:10.810334    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:10.817816    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0513 23:24:10.819053    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:10.819053    6944 round_trippers.go:469] Request Headers:
I0513 23:24:10.819053    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:10.819053    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:10.823653    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:11.296438    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:11.296766    6944 round_trippers.go:469] Request Headers:
I0513 23:24:11.296835    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:11.296835    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:11.303708    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0513 23:24:11.304471    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:11.304471    6944 round_trippers.go:469] Request Headers:
I0513 23:24:11.304471    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:11.304533    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:11.309890    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:11.809710    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:11.809962    6944 round_trippers.go:469] Request Headers:
I0513 23:24:11.809962    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:11.809962    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:11.814112    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:11.815677    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:11.815677    6944 round_trippers.go:469] Request Headers:
I0513 23:24:11.815793    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:11.815793    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:11.819971    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:12.294997    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:12.295108    6944 round_trippers.go:469] Request Headers:
I0513 23:24:12.295108    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:12.295108    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:12.300452    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:12.301986    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:12.302068    6944 round_trippers.go:469] Request Headers:
I0513 23:24:12.302068    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:12.302142    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:12.309688    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0513 23:24:12.312085    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:24:12.808553    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:12.808636    6944 round_trippers.go:469] Request Headers:
I0513 23:24:12.808636    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:12.808636    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:12.813748    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:12.815407    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:12.815407    6944 round_trippers.go:469] Request Headers:
I0513 23:24:12.815407    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:12.815407    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:12.821149    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:13.296194    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:13.296194    6944 round_trippers.go:469] Request Headers:
I0513 23:24:13.296194    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:13.296194    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:13.304804    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0513 23:24:13.305887    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:13.305951    6944 round_trippers.go:469] Request Headers:
I0513 23:24:13.305951    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:13.305951    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:13.310330    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:13.803998    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:13.804190    6944 round_trippers.go:469] Request Headers:
I0513 23:24:13.804247    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:13.804247    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:13.808510    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:13.810602    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:13.810642    6944 round_trippers.go:469] Request Headers:
I0513 23:24:13.810683    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:13.810683    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:13.815970    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:14.298934    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:14.298996    6944 round_trippers.go:469] Request Headers:
I0513 23:24:14.298996    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:14.298996    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:14.307500    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0513 23:24:14.308640    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:14.308640    6944 round_trippers.go:469] Request Headers:
I0513 23:24:14.308695    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:14.308695    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:14.314554    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:14.315388    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:24:14.805048    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:14.805271    6944 round_trippers.go:469] Request Headers:
I0513 23:24:14.805271    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:14.805367    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:14.810231    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:14.811458    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:14.811549    6944 round_trippers.go:469] Request Headers:
I0513 23:24:14.811549    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:14.811626    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:14.819054    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0513 23:24:15.300862    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:15.300862    6944 round_trippers.go:469] Request Headers:
I0513 23:24:15.300862    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:15.300958    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:15.304991    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:15.306987    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:15.307063    6944 round_trippers.go:469] Request Headers:
I0513 23:24:15.307063    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:15.307063    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:15.311424    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:15.800000    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:15.800000    6944 round_trippers.go:469] Request Headers:
I0513 23:24:15.800189    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:15.800189    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:15.804400    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:15.806370    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:15.806444    6944 round_trippers.go:469] Request Headers:
I0513 23:24:15.806444    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:15.806444    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:15.810054    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:16.299648    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:16.299724    6944 round_trippers.go:469] Request Headers:
I0513 23:24:16.299724    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:16.299724    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:16.305506    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:16.306928    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:16.307029    6944 round_trippers.go:469] Request Headers:
I0513 23:24:16.307029    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:16.307029    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:16.315305    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0513 23:24:16.798720    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:16.798838    6944 round_trippers.go:469] Request Headers:
I0513 23:24:16.798838    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:16.798838    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:16.802956    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:16.804864    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:16.804864    6944 round_trippers.go:469] Request Headers:
I0513 23:24:16.804864    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:16.804864    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:16.809027    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:16.810894    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:24:17.303961    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:17.304018    6944 round_trippers.go:469] Request Headers:
I0513 23:24:17.304018    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:17.304018    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:17.310743    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0513 23:24:17.311505    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:17.311505    6944 round_trippers.go:469] Request Headers:
I0513 23:24:17.311505    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:17.311627    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:17.316036    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:17.796644    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:17.796644    6944 round_trippers.go:469] Request Headers:
I0513 23:24:17.796644    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:17.796644    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:17.800813    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:17.802509    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:17.802509    6944 round_trippers.go:469] Request Headers:
I0513 23:24:17.802509    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:17.802509    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:17.806076    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:18.300968    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:18.300968    6944 round_trippers.go:469] Request Headers:
I0513 23:24:18.301064    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:18.301064    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:18.309174    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0513 23:24:18.311038    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:18.311104    6944 round_trippers.go:469] Request Headers:
I0513 23:24:18.311104    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:18.311104    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:18.315719    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:18.802455    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:18.802455    6944 round_trippers.go:469] Request Headers:
I0513 23:24:18.802455    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:18.802561    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:18.807164    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:18.808942    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:18.809027    6944 round_trippers.go:469] Request Headers:
I0513 23:24:18.809027    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:18.809027    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:18.812449    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:18.813914    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:24:19.303892    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:19.304071    6944 round_trippers.go:469] Request Headers:
I0513 23:24:19.304071    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:19.304071    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:19.310096    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0513 23:24:19.311098    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:19.311098    6944 round_trippers.go:469] Request Headers:
I0513 23:24:19.311098    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:19.311098    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:19.316270    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:19.802353    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:19.802800    6944 round_trippers.go:469] Request Headers:
I0513 23:24:19.802800    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:19.802800    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:19.808288    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:19.808762    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:19.808762    6944 round_trippers.go:469] Request Headers:
I0513 23:24:19.808762    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:19.808762    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:19.816081    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0513 23:24:20.299394    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:20.299469    6944 round_trippers.go:469] Request Headers:
I0513 23:24:20.299469    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:20.299469    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:20.305827    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0513 23:24:20.307267    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:20.307267    6944 round_trippers.go:469] Request Headers:
I0513 23:24:20.307267    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:20.307267    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:20.315716    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0513 23:24:20.798553    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:20.798553    6944 round_trippers.go:469] Request Headers:
I0513 23:24:20.798553    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:20.798553    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:20.802121    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:20.803475    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:20.803475    6944 round_trippers.go:469] Request Headers:
I0513 23:24:20.803475    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:20.803475    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:20.806706    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:21.297883    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:21.297956    6944 round_trippers.go:469] Request Headers:
I0513 23:24:21.298028    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:21.298028    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:21.302258    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:21.303953    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:21.304051    6944 round_trippers.go:469] Request Headers:
I0513 23:24:21.304051    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:21.304051    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:21.308358    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:21.309706    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:24:21.796105    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:21.796105    6944 round_trippers.go:469] Request Headers:
I0513 23:24:21.796105    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:21.796105    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:21.802749    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0513 23:24:21.803520    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:21.803520    6944 round_trippers.go:469] Request Headers:
I0513 23:24:21.803580    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:21.803580    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:21.808425    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:22.298805    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:22.298871    6944 round_trippers.go:469] Request Headers:
I0513 23:24:22.298871    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:22.298871    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:22.303347    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:22.304639    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:22.304639    6944 round_trippers.go:469] Request Headers:
I0513 23:24:22.304639    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:22.304639    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:22.312400    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0513 23:24:22.796971    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:22.796971    6944 round_trippers.go:469] Request Headers:
I0513 23:24:22.796971    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:22.796971    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:22.802331    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:22.803583    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:22.803583    6944 round_trippers.go:469] Request Headers:
I0513 23:24:22.803583    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:22.803583    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:22.806880    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:23.309256    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:23.309256    6944 round_trippers.go:469] Request Headers:
I0513 23:24:23.309256    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:23.309256    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:23.314767    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:23.315547    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:23.315618    6944 round_trippers.go:469] Request Headers:
I0513 23:24:23.315618    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:23.315618    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:23.320504    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:23.321684    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:24:23.808853    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:23.808853    6944 round_trippers.go:469] Request Headers:
I0513 23:24:23.808853    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:23.808853    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:23.814415    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:23.815413    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:23.815413    6944 round_trippers.go:469] Request Headers:
I0513 23:24:23.815413    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:23.815413    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:23.819772    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:24.310333    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:24.310399    6944 round_trippers.go:469] Request Headers:
I0513 23:24:24.310399    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:24.310399    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:24.315373    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:24.317030    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:24.317030    6944 round_trippers.go:469] Request Headers:
I0513 23:24:24.317030    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:24.317030    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:24.330673    6944 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
I0513 23:24:24.795691    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:24.795770    6944 round_trippers.go:469] Request Headers:
I0513 23:24:24.795770    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:24.795770    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:24.800215    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:24.801932    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:24.802003    6944 round_trippers.go:469] Request Headers:
I0513 23:24:24.802003    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:24.802003    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:24.806143    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:25.299997    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:25.300088    6944 round_trippers.go:469] Request Headers:
I0513 23:24:25.300088    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:25.300088    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:25.304556    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:25.306300    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:25.306300    6944 round_trippers.go:469] Request Headers:
I0513 23:24:25.306300    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:25.306300    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:25.310677    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:25.798942    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:25.798942    6944 round_trippers.go:469] Request Headers:
I0513 23:24:25.798942    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:25.798942    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:25.803412    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:25.805429    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:25.805527    6944 round_trippers.go:469] Request Headers:
I0513 23:24:25.805527    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:25.805599    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:25.808976    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:25.810299    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:24:26.300365    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:26.300442    6944 round_trippers.go:469] Request Headers:
I0513 23:24:26.300505    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:26.300505    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:26.305408    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:26.306759    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:26.306759    6944 round_trippers.go:469] Request Headers:
I0513 23:24:26.306834    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:26.306834    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:26.310587    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:26.799440    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:26.799440    6944 round_trippers.go:469] Request Headers:
I0513 23:24:26.799440    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:26.799440    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:26.808038    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0513 23:24:26.809703    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:26.809783    6944 round_trippers.go:469] Request Headers:
I0513 23:24:26.809783    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:26.809783    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:26.814091    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:27.297851    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:27.297851    6944 round_trippers.go:469] Request Headers:
I0513 23:24:27.297851    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:27.297851    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:27.302623    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:27.305065    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:27.305065    6944 round_trippers.go:469] Request Headers:
I0513 23:24:27.305139    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:27.305139    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:27.310447    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:27.809644    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:27.809644    6944 round_trippers.go:469] Request Headers:
I0513 23:24:27.809870    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:27.809870    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:27.814361    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:27.815182    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:27.815182    6944 round_trippers.go:469] Request Headers:
I0513 23:24:27.815248    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:27.815248    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:27.818955    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:27.820099    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:24:28.297764    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:28.297874    6944 round_trippers.go:469] Request Headers:
I0513 23:24:28.297874    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:28.297874    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:28.302241    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:28.303860    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:28.303860    6944 round_trippers.go:469] Request Headers:
I0513 23:24:28.303860    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:28.303860    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:28.310670    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0513 23:24:28.801630    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:28.801743    6944 round_trippers.go:469] Request Headers:
I0513 23:24:28.801743    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:28.801743    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:28.806294    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:28.807959    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:28.807959    6944 round_trippers.go:469] Request Headers:
I0513 23:24:28.807959    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:28.807959    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:28.811319    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:29.302555    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:29.302555    6944 round_trippers.go:469] Request Headers:
I0513 23:24:29.302648    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:29.302648    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:29.307472    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:29.308817    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:29.308817    6944 round_trippers.go:469] Request Headers:
I0513 23:24:29.308817    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:29.308817    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:29.313194    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:29.801140    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:29.801218    6944 round_trippers.go:469] Request Headers:
I0513 23:24:29.801218    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:29.801218    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:29.806848    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:29.807997    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:29.808119    6944 round_trippers.go:469] Request Headers:
I0513 23:24:29.808119    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:29.808119    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:29.815887    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0513 23:24:30.299685    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:30.299685    6944 round_trippers.go:469] Request Headers:
I0513 23:24:30.299791    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:30.299791    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:30.307143    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0513 23:24:30.308546    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:30.308602    6944 round_trippers.go:469] Request Headers:
I0513 23:24:30.308602    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:30.308602    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:30.312804    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:30.313510    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:24:30.798765    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:30.798926    6944 round_trippers.go:469] Request Headers:
I0513 23:24:30.798926    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:30.798926    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:30.803723    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:30.804420    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:30.804523    6944 round_trippers.go:469] Request Headers:
I0513 23:24:30.804523    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:30.804523    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:30.809121    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:31.295648    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:31.295648    6944 round_trippers.go:469] Request Headers:
I0513 23:24:31.295648    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:31.295648    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:31.299576    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:31.300743    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:31.300743    6944 round_trippers.go:469] Request Headers:
I0513 23:24:31.300828    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:31.300828    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:31.304049    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:31.799839    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:31.800220    6944 round_trippers.go:469] Request Headers:
I0513 23:24:31.800220    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:31.800220    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:31.804497    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:31.805561    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:31.805561    6944 round_trippers.go:469] Request Headers:
I0513 23:24:31.805561    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:31.805561    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:31.810605    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:32.299027    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:32.299027    6944 round_trippers.go:469] Request Headers:
I0513 23:24:32.299027    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:32.299027    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:32.307065    6944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0513 23:24:32.307625    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:32.307625    6944 round_trippers.go:469] Request Headers:
I0513 23:24:32.307625    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:32.307625    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:32.312204    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:32.800429    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:32.800547    6944 round_trippers.go:469] Request Headers:
I0513 23:24:32.800547    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:32.800547    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:32.826392    6944 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
I0513 23:24:32.829193    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:32.829263    6944 round_trippers.go:469] Request Headers:
I0513 23:24:32.829263    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:32.829263    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:32.836379    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0513 23:24:32.837453    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:24:33.299761    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:33.299761    6944 round_trippers.go:469] Request Headers:
I0513 23:24:33.299848    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:33.299848    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:33.304798    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:33.305653    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:33.305653    6944 round_trippers.go:469] Request Headers:
I0513 23:24:33.305723    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:33.305723    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:33.309418    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:33.801608    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:33.801608    6944 round_trippers.go:469] Request Headers:
I0513 23:24:33.801608    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:33.801608    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:33.806167    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:33.807379    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:33.807379    6944 round_trippers.go:469] Request Headers:
I0513 23:24:33.807379    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:33.807379    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:33.813976    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0513 23:24:34.303669    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:34.303732    6944 round_trippers.go:469] Request Headers:
I0513 23:24:34.303732    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:34.303788    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:34.313132    6944 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
I0513 23:24:34.314216    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:34.314216    6944 round_trippers.go:469] Request Headers:
I0513 23:24:34.314216    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:34.314216    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:34.320644    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0513 23:24:34.806343    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:34.806524    6944 round_trippers.go:469] Request Headers:
I0513 23:24:34.806524    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:34.806524    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:34.810951    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:34.812377    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:34.812377    6944 round_trippers.go:469] Request Headers:
I0513 23:24:34.812377    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:34.812377    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:34.818031    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:35.306779    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:35.306779    6944 round_trippers.go:469] Request Headers:
I0513 23:24:35.306779    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:35.306779    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:35.310892    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:35.313076    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:35.313140    6944 round_trippers.go:469] Request Headers:
I0513 23:24:35.313140    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:35.313140    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:35.316257    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:35.317243    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:24:35.807010    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:35.807010    6944 round_trippers.go:469] Request Headers:
I0513 23:24:35.807010    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:35.807010    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:35.811348    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:35.812484    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:35.812636    6944 round_trippers.go:469] Request Headers:
I0513 23:24:35.812636    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:35.812636    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:35.816958    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:36.309280    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:36.309345    6944 round_trippers.go:469] Request Headers:
I0513 23:24:36.309412    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:36.309412    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:36.313968    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:36.315369    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:36.315369    6944 round_trippers.go:469] Request Headers:
I0513 23:24:36.315369    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:36.315369    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:36.320096    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:36.810076    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:36.810394    6944 round_trippers.go:469] Request Headers:
I0513 23:24:36.810394    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:36.810394    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:36.815445    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:36.815445    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:36.815445    6944 round_trippers.go:469] Request Headers:
I0513 23:24:36.816639    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:36.816639    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:36.820872    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:37.309407    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:37.309407    6944 round_trippers.go:469] Request Headers:
I0513 23:24:37.309407    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:37.309407    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:37.314978    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:37.315750    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:37.315750    6944 round_trippers.go:469] Request Headers:
I0513 23:24:37.315750    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:37.315750    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:37.319616    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:37.320360    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
I0513 23:24:37.809720    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:37.809720    6944 round_trippers.go:469] Request Headers:
I0513 23:24:37.809720    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:37.809720    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:37.815014    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:37.816081    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:37.816081    6944 round_trippers.go:469] Request Headers:
I0513 23:24:37.816177    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:37.816177    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:37.820420    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:38.297198    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:38.297439    6944 round_trippers.go:469] Request Headers:
I0513 23:24:38.297439    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:38.297439    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:38.302712    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:38.304074    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:38.304074    6944 round_trippers.go:469] Request Headers:
I0513 23:24:38.304185    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:38.304185    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:38.312420    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0513 23:24:38.800395    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:38.800461    6944 round_trippers.go:469] Request Headers:
I0513 23:24:38.800461    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:38.800461    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:38.806641    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0513 23:24:38.807815    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:38.807815    6944 round_trippers.go:469] Request Headers:
I0513 23:24:38.807866    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:38.807866    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:38.811528    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0513 23:24:39.303042    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:39.303042    6944 round_trippers.go:469] Request Headers:
I0513 23:24:39.303042    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:39.303042    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:39.308570    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0513 23:24:39.309655    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:39.309725    6944 round_trippers.go:469] Request Headers:
I0513 23:24:39.309725    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:39.309725    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:39.316985    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0513 23:24:39.805299    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
I0513 23:24:39.805299    6944 round_trippers.go:469] Request Headers:
I0513 23:24:39.805299    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:39.805381    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:39.809389    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0513 23:24:39.811275    6944 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
I0513 23:24:39.811275    6944 round_trippers.go:469] Request Headers:
I0513 23:24:39.811379    6944 round_trippers.go:473]     Accept: application/json, */*
I0513 23:24:39.811379    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0513 23:24:39.817610    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0513 23:24:39.818582    6944 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-windows-amd64.exe -p ha-586300 node start m02 -v=7 --alsologtostderr": exit status 1
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-586300 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-586300 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-586300 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-586300 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-586300 status -v=7 --alsologtostderr: context deadline exceeded (68.2µs)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-586300 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-586300 status -v=7 --alsologtostderr: context deadline exceeded (119µs)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-586300 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-586300 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:432: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-586300 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-586300 -n ha-586300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-586300 -n ha-586300: (11.0677702s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 logs -n 25: (7.7412613s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                            |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| ssh     | ha-586300 ssh -n                                                                                                          | ha-586300 | minikube5\jenkins | v1.33.1 | 13 May 24 23:16 UTC | 13 May 24 23:16 UTC |
	|         | ha-586300-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-586300 cp ha-586300-m03:/home/docker/cp-test.txt                                                                       | ha-586300 | minikube5\jenkins | v1.33.1 | 13 May 24 23:16 UTC | 13 May 24 23:16 UTC |
	|         | ha-586300:/home/docker/cp-test_ha-586300-m03_ha-586300.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-586300 ssh -n                                                                                                          | ha-586300 | minikube5\jenkins | v1.33.1 | 13 May 24 23:16 UTC | 13 May 24 23:16 UTC |
	|         | ha-586300-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-586300 ssh -n ha-586300 sudo cat                                                                                       | ha-586300 | minikube5\jenkins | v1.33.1 | 13 May 24 23:16 UTC | 13 May 24 23:16 UTC |
	|         | /home/docker/cp-test_ha-586300-m03_ha-586300.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-586300 cp ha-586300-m03:/home/docker/cp-test.txt                                                                       | ha-586300 | minikube5\jenkins | v1.33.1 | 13 May 24 23:16 UTC | 13 May 24 23:17 UTC |
	|         | ha-586300-m02:/home/docker/cp-test_ha-586300-m03_ha-586300-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-586300 ssh -n                                                                                                          | ha-586300 | minikube5\jenkins | v1.33.1 | 13 May 24 23:17 UTC | 13 May 24 23:17 UTC |
	|         | ha-586300-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-586300 ssh -n ha-586300-m02 sudo cat                                                                                   | ha-586300 | minikube5\jenkins | v1.33.1 | 13 May 24 23:17 UTC | 13 May 24 23:17 UTC |
	|         | /home/docker/cp-test_ha-586300-m03_ha-586300-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-586300 cp ha-586300-m03:/home/docker/cp-test.txt                                                                       | ha-586300 | minikube5\jenkins | v1.33.1 | 13 May 24 23:17 UTC | 13 May 24 23:17 UTC |
	|         | ha-586300-m04:/home/docker/cp-test_ha-586300-m03_ha-586300-m04.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-586300 ssh -n                                                                                                          | ha-586300 | minikube5\jenkins | v1.33.1 | 13 May 24 23:17 UTC | 13 May 24 23:17 UTC |
	|         | ha-586300-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-586300 ssh -n ha-586300-m04 sudo cat                                                                                   | ha-586300 | minikube5\jenkins | v1.33.1 | 13 May 24 23:17 UTC | 13 May 24 23:17 UTC |
	|         | /home/docker/cp-test_ha-586300-m03_ha-586300-m04.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-586300 cp testdata\cp-test.txt                                                                                         | ha-586300 | minikube5\jenkins | v1.33.1 | 13 May 24 23:17 UTC | 13 May 24 23:18 UTC |
	|         | ha-586300-m04:/home/docker/cp-test.txt                                                                                    |           |                   |         |                     |                     |
	| ssh     | ha-586300 ssh -n                                                                                                          | ha-586300 | minikube5\jenkins | v1.33.1 | 13 May 24 23:18 UTC | 13 May 24 23:18 UTC |
	|         | ha-586300-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-586300 cp ha-586300-m04:/home/docker/cp-test.txt                                                                       | ha-586300 | minikube5\jenkins | v1.33.1 | 13 May 24 23:18 UTC | 13 May 24 23:18 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3745865926\001\cp-test_ha-586300-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-586300 ssh -n                                                                                                          | ha-586300 | minikube5\jenkins | v1.33.1 | 13 May 24 23:18 UTC | 13 May 24 23:18 UTC |
	|         | ha-586300-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-586300 cp ha-586300-m04:/home/docker/cp-test.txt                                                                       | ha-586300 | minikube5\jenkins | v1.33.1 | 13 May 24 23:18 UTC | 13 May 24 23:18 UTC |
	|         | ha-586300:/home/docker/cp-test_ha-586300-m04_ha-586300.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-586300 ssh -n                                                                                                          | ha-586300 | minikube5\jenkins | v1.33.1 | 13 May 24 23:18 UTC | 13 May 24 23:18 UTC |
	|         | ha-586300-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-586300 ssh -n ha-586300 sudo cat                                                                                       | ha-586300 | minikube5\jenkins | v1.33.1 | 13 May 24 23:18 UTC | 13 May 24 23:19 UTC |
	|         | /home/docker/cp-test_ha-586300-m04_ha-586300.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-586300 cp ha-586300-m04:/home/docker/cp-test.txt                                                                       | ha-586300 | minikube5\jenkins | v1.33.1 | 13 May 24 23:19 UTC | 13 May 24 23:19 UTC |
	|         | ha-586300-m02:/home/docker/cp-test_ha-586300-m04_ha-586300-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-586300 ssh -n                                                                                                          | ha-586300 | minikube5\jenkins | v1.33.1 | 13 May 24 23:19 UTC | 13 May 24 23:19 UTC |
	|         | ha-586300-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-586300 ssh -n ha-586300-m02 sudo cat                                                                                   | ha-586300 | minikube5\jenkins | v1.33.1 | 13 May 24 23:19 UTC | 13 May 24 23:19 UTC |
	|         | /home/docker/cp-test_ha-586300-m04_ha-586300-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-586300 cp ha-586300-m04:/home/docker/cp-test.txt                                                                       | ha-586300 | minikube5\jenkins | v1.33.1 | 13 May 24 23:19 UTC | 13 May 24 23:19 UTC |
	|         | ha-586300-m03:/home/docker/cp-test_ha-586300-m04_ha-586300-m03.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-586300 ssh -n                                                                                                          | ha-586300 | minikube5\jenkins | v1.33.1 | 13 May 24 23:19 UTC | 13 May 24 23:19 UTC |
	|         | ha-586300-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-586300 ssh -n ha-586300-m03 sudo cat                                                                                   | ha-586300 | minikube5\jenkins | v1.33.1 | 13 May 24 23:19 UTC | 13 May 24 23:20 UTC |
	|         | /home/docker/cp-test_ha-586300-m04_ha-586300-m03.txt                                                                      |           |                   |         |                     |                     |
	| node    | ha-586300 node stop m02 -v=7                                                                                              | ha-586300 | minikube5\jenkins | v1.33.1 | 13 May 24 23:20 UTC | 13 May 24 23:20 UTC |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	| node    | ha-586300 node start m02 -v=7                                                                                             | ha-586300 | minikube5\jenkins | v1.33.1 | 13 May 24 23:21 UTC |                     |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/13 22:54:40
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0513 22:54:40.050723   11992 out.go:291] Setting OutFile to fd 992 ...
	I0513 22:54:40.050723   11992 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:54:40.050723   11992 out.go:304] Setting ErrFile to fd 960...
	I0513 22:54:40.051723   11992 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:54:40.076723   11992 out.go:298] Setting JSON to false
	I0513 22:54:40.080566   11992 start.go:129] hostinfo: {"hostname":"minikube5","uptime":2443,"bootTime":1715638436,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4355 Build 19045.4355","kernelVersion":"10.0.19045.4355 Build 19045.4355","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0513 22:54:40.080685   11992 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 22:54:40.086154   11992 out.go:177] * [ha-586300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	I0513 22:54:40.089904   11992 notify.go:220] Checking for updates...
	I0513 22:54:40.092370   11992 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 22:54:40.095146   11992 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 22:54:40.097865   11992 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0513 22:54:40.100413   11992 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 22:54:40.102617   11992 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 22:54:40.106082   11992 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 22:54:44.836967   11992 out.go:177] * Using the hyperv driver based on user configuration
	I0513 22:54:44.839893   11992 start.go:297] selected driver: hyperv
	I0513 22:54:44.839893   11992 start.go:901] validating driver "hyperv" against <nil>
	I0513 22:54:44.839893   11992 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 22:54:44.882441   11992 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 22:54:44.883559   11992 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 22:54:44.883559   11992 cni.go:84] Creating CNI manager for ""
	I0513 22:54:44.883559   11992 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0513 22:54:44.883559   11992 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0513 22:54:44.883559   11992 start.go:340] cluster config:
	{Name:ha-586300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-586300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin
:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I0513 22:54:44.884556   11992 iso.go:125] acquiring lock: {Name:mkcecbdb7e30e5a0901160a859f9d5b65d250c44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 22:54:44.888031   11992 out.go:177] * Starting "ha-586300" primary control-plane node in "ha-586300" cluster
	I0513 22:54:44.890966   11992 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 22:54:44.891977   11992 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0513 22:54:44.891977   11992 cache.go:56] Caching tarball of preloaded images
	I0513 22:54:44.892298   11992 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0513 22:54:44.892298   11992 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 22:54:44.892948   11992 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\config.json ...
	I0513 22:54:44.893236   11992 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\config.json: {Name:mk9bf1a8c36fb3c2a6eb432b78e40cc7c3ec6d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:54:44.893448   11992 start.go:360] acquireMachinesLock for ha-586300: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 22:54:44.895393   11992 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-586300"
	I0513 22:54:44.895393   11992 start.go:93] Provisioning new machine with config: &{Name:ha-586300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-586300 Namespace:def
ault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 22:54:44.896449   11992 start.go:125] createHost starting for "" (driver="hyperv")
	I0513 22:54:44.902407   11992 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 22:54:44.902845   11992 start.go:159] libmachine.API.Create for "ha-586300" (driver="hyperv")
	I0513 22:54:44.902845   11992 client.go:168] LocalClient.Create starting
	I0513 22:54:44.903041   11992 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0513 22:54:44.903405   11992 main.go:141] libmachine: Decoding PEM data...
	I0513 22:54:44.903457   11992 main.go:141] libmachine: Parsing certificate...
	I0513 22:54:44.903539   11992 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0513 22:54:44.903539   11992 main.go:141] libmachine: Decoding PEM data...
	I0513 22:54:44.903539   11992 main.go:141] libmachine: Parsing certificate...
	I0513 22:54:44.903539   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0513 22:54:46.699025   11992 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0513 22:54:46.699025   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:54:46.699418   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0513 22:54:48.192453   11992 main.go:141] libmachine: [stdout =====>] : False
	
	I0513 22:54:48.193019   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:54:48.193081   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0513 22:54:49.507998   11992 main.go:141] libmachine: [stdout =====>] : True
	
	I0513 22:54:49.507998   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:54:49.508899   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0513 22:54:52.660535   11992 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0513 22:54:52.661290   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:54:52.662836   11992 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-amd64.iso...
	I0513 22:54:52.990932   11992 main.go:141] libmachine: Creating SSH key...
	I0513 22:54:53.092530   11992 main.go:141] libmachine: Creating VM...
	I0513 22:54:53.093542   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0513 22:54:55.554838   11992 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0513 22:54:55.555680   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:54:55.555754   11992 main.go:141] libmachine: Using switch "Default Switch"
	I0513 22:54:55.555813   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0513 22:54:57.056960   11992 main.go:141] libmachine: [stdout =====>] : True
	
	I0513 22:54:57.056960   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:54:57.057859   11992 main.go:141] libmachine: Creating VHD
	I0513 22:54:57.057859   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0513 22:55:00.538720   11992 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6572D9F0-51A2-4A27-9519-F7574A6B3534
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0513 22:55:00.538955   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:00.538955   11992 main.go:141] libmachine: Writing magic tar header
	I0513 22:55:00.539043   11992 main.go:141] libmachine: Writing SSH key tar header
	I0513 22:55:00.548212   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0513 22:55:03.547140   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:55:03.547140   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:03.547571   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\disk.vhd' -SizeBytes 20000MB
	I0513 22:55:05.905765   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:55:05.905976   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:05.905976   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-586300 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0513 22:55:09.238782   11992 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-586300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0513 22:55:09.238782   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:09.239208   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-586300 -DynamicMemoryEnabled $false
	I0513 22:55:11.253825   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:55:11.254432   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:11.254537   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-586300 -Count 2
	I0513 22:55:13.241001   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:55:13.241001   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:13.241514   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-586300 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\boot2docker.iso'
	I0513 22:55:15.561644   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:55:15.561644   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:15.562243   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-586300 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\disk.vhd'
	I0513 22:55:17.914085   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:55:17.914085   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:17.914085   11992 main.go:141] libmachine: Starting VM...
	I0513 22:55:17.914487   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-586300
	I0513 22:55:20.718303   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:55:20.718303   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:20.718303   11992 main.go:141] libmachine: Waiting for host to start...
	I0513 22:55:20.718712   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:55:22.768072   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:55:22.768072   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:22.768248   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:55:25.052320   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:55:25.052320   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:26.064864   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:55:28.052566   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:55:28.052566   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:28.053214   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:55:30.286852   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:55:30.286852   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:31.290134   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:55:33.242289   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:55:33.242289   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:33.242524   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:55:35.464671   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:55:35.464836   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:36.465663   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:55:38.438427   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:55:38.439331   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:38.439331   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:55:40.677929   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:55:40.677929   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:41.687410   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:55:43.639057   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:55:43.639057   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:43.639245   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:55:45.992991   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:55:45.992991   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:45.992991   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:55:47.879622   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:55:47.880502   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:47.880775   11992 machine.go:94] provisionDockerMachine start ...
	I0513 22:55:47.880775   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:55:49.806117   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:55:49.806117   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:49.806211   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:55:52.053553   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:55:52.053553   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:52.059341   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:55:52.070438   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.229 22 <nil> <nil>}
	I0513 22:55:52.070499   11992 main.go:141] libmachine: About to run SSH command:
	hostname
	I0513 22:55:52.227422   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0513 22:55:52.227422   11992 buildroot.go:166] provisioning hostname "ha-586300"
	I0513 22:55:52.227422   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:55:54.095806   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:55:54.095806   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:54.096172   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:55:56.367278   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:55:56.367278   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:56.371142   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:55:56.371142   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.229 22 <nil> <nil>}
	I0513 22:55:56.371142   11992 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-586300 && echo "ha-586300" | sudo tee /etc/hostname
	I0513 22:55:56.535204   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-586300
	
	I0513 22:55:56.535282   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:55:58.424502   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:55:58.424768   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:55:58.424871   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:00.665044   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:00.665044   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:00.668672   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:56:00.668672   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.229 22 <nil> <nil>}
	I0513 22:56:00.668672   11992 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-586300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-586300/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-586300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0513 22:56:00.822750   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0513 22:56:00.822750   11992 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0513 22:56:00.822946   11992 buildroot.go:174] setting up certificates
	I0513 22:56:00.822946   11992 provision.go:84] configureAuth start
	I0513 22:56:00.823080   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:02.751390   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:02.751390   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:02.751867   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:04.969160   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:04.969160   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:04.970083   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:06.838970   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:06.838970   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:06.839828   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:09.094319   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:09.094647   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:09.094647   11992 provision.go:143] copyHostCerts
	I0513 22:56:09.094809   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0513 22:56:09.095023   11992 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0513 22:56:09.095023   11992 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0513 22:56:09.095124   11992 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0513 22:56:09.096087   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0513 22:56:09.096215   11992 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0513 22:56:09.096215   11992 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0513 22:56:09.096215   11992 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0513 22:56:09.097084   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0513 22:56:09.097234   11992 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0513 22:56:09.097310   11992 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0513 22:56:09.097483   11992 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0513 22:56:09.098017   11992 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-586300 san=[127.0.0.1 172.23.102.229 ha-586300 localhost minikube]
	I0513 22:56:09.326691   11992 provision.go:177] copyRemoteCerts
	I0513 22:56:09.334693   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0513 22:56:09.334693   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:11.251332   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:11.251332   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:11.251575   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:13.501065   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:13.501295   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:13.501601   11992 sshutil.go:53] new ssh client: &{IP:172.23.102.229 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\id_rsa Username:docker}
	I0513 22:56:13.611146   11992 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2762835s)
	I0513 22:56:13.611146   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0513 22:56:13.611774   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0513 22:56:13.656804   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0513 22:56:13.657415   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0513 22:56:13.698805   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0513 22:56:13.699401   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0513 22:56:13.738511   11992 provision.go:87] duration metric: took 12.9150546s to configureAuth
	I0513 22:56:13.738511   11992 buildroot.go:189] setting minikube options for container-runtime
	I0513 22:56:13.739780   11992 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 22:56:13.739897   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:15.594877   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:15.594877   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:15.594877   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:17.849212   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:17.849212   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:17.853202   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:56:17.853619   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.229 22 <nil> <nil>}
	I0513 22:56:17.853619   11992 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0513 22:56:17.998409   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0513 22:56:17.998409   11992 buildroot.go:70] root file system type: tmpfs
	I0513 22:56:17.998409   11992 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0513 22:56:17.998409   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:19.900262   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:19.900262   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:19.900321   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:22.139933   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:22.139933   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:22.143859   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:56:22.144235   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.229 22 <nil> <nil>}
	I0513 22:56:22.144314   11992 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0513 22:56:22.305406   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0513 22:56:22.305511   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:24.195303   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:24.195303   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:24.196166   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:26.437076   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:26.437076   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:26.440584   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:56:26.441156   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.229 22 <nil> <nil>}
	I0513 22:56:26.441156   11992 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0513 22:56:28.498770   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0513 22:56:28.498862   11992 machine.go:97] duration metric: took 40.6164827s to provisionDockerMachine
	I0513 22:56:28.498862   11992 client.go:171] duration metric: took 1m43.5919519s to LocalClient.Create
	I0513 22:56:28.498993   11992 start.go:167] duration metric: took 1m43.5920824s to libmachine.API.Create "ha-586300"
	I0513 22:56:28.499057   11992 start.go:293] postStartSetup for "ha-586300" (driver="hyperv")
	I0513 22:56:28.499057   11992 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0513 22:56:28.510991   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0513 22:56:28.510991   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:30.387242   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:30.388004   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:30.388004   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:32.608187   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:32.608187   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:32.608547   11992 sshutil.go:53] new ssh client: &{IP:172.23.102.229 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\id_rsa Username:docker}
	I0513 22:56:32.720695   11992 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2095372s)
	I0513 22:56:32.729981   11992 ssh_runner.go:195] Run: cat /etc/os-release
	I0513 22:56:32.737165   11992 info.go:137] Remote host: Buildroot 2023.02.9
	I0513 22:56:32.737165   11992 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0513 22:56:32.737553   11992 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0513 22:56:32.738251   11992 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> 59842.pem in /etc/ssl/certs
	I0513 22:56:32.738323   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /etc/ssl/certs/59842.pem
	I0513 22:56:32.746908   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0513 22:56:32.761659   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /etc/ssl/certs/59842.pem (1708 bytes)
	I0513 22:56:32.806599   11992 start.go:296] duration metric: took 4.3073718s for postStartSetup
	I0513 22:56:32.808360   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:34.657120   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:34.657120   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:34.657196   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:36.867345   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:36.867345   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:36.867872   11992 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\config.json ...
	I0513 22:56:36.870218   11992 start.go:128] duration metric: took 1m51.9693716s to createHost
	I0513 22:56:36.870218   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:38.745897   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:38.745897   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:38.745897   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:40.988108   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:40.989066   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:40.992686   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:56:40.993342   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.229 22 <nil> <nil>}
	I0513 22:56:40.993342   11992 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0513 22:56:41.127579   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715641001.293069690
	
	I0513 22:56:41.127579   11992 fix.go:216] guest clock: 1715641001.293069690
	I0513 22:56:41.127579   11992 fix.go:229] Guest: 2024-05-13 22:56:41.29306969 +0000 UTC Remote: 2024-05-13 22:56:36.8702184 +0000 UTC m=+116.953513901 (delta=4.42285129s)
	I0513 22:56:41.128281   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:42.979603   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:42.979603   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:42.979682   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:45.204612   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:45.204612   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:45.210403   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:56:45.210473   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.229 22 <nil> <nil>}
	I0513 22:56:45.210473   11992 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715641001
	I0513 22:56:45.356514   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 13 22:56:41 UTC 2024
	
	I0513 22:56:45.356596   11992 fix.go:236] clock set: Mon May 13 22:56:41 UTC 2024
	 (err=<nil>)
	I0513 22:56:45.356596   11992 start.go:83] releasing machines lock for "ha-586300", held for 2m0.4564688s
	I0513 22:56:45.356821   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:47.200079   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:47.200497   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:47.200497   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:49.496322   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:49.496322   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:49.501115   11992 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0513 22:56:49.501225   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:49.510577   11992 ssh_runner.go:195] Run: cat /version.json
	I0513 22:56:49.510577   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:56:51.453113   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:51.453113   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:51.453220   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:51.453753   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:56:51.453753   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:51.453938   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:56:53.830207   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:53.830207   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:53.831385   11992 sshutil.go:53] new ssh client: &{IP:172.23.102.229 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\id_rsa Username:docker}
	I0513 22:56:53.849606   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:56:53.850623   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:56:53.850965   11992 sshutil.go:53] new ssh client: &{IP:172.23.102.229 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\id_rsa Username:docker}
	I0513 22:56:53.930608   11992 ssh_runner.go:235] Completed: cat /version.json: (4.4198556s)
	I0513 22:56:53.942212   11992 ssh_runner.go:195] Run: systemctl --version
	I0513 22:56:54.009421   11992 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5081271s)
	I0513 22:56:54.021893   11992 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0513 22:56:54.031235   11992 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0513 22:56:54.044192   11992 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0513 22:56:54.068622   11992 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0513 22:56:54.068622   11992 start.go:494] detecting cgroup driver to use...
	I0513 22:56:54.069227   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 22:56:54.107429   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0513 22:56:54.132848   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0513 22:56:54.150027   11992 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0513 22:56:54.157905   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0513 22:56:54.189117   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 22:56:54.217414   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0513 22:56:54.246352   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 22:56:54.271134   11992 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0513 22:56:54.296469   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0513 22:56:54.320335   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0513 22:56:54.354896   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0513 22:56:54.386361   11992 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0513 22:56:54.415190   11992 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0513 22:56:54.444589   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:56:54.612531   11992 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0513 22:56:54.637389   11992 start.go:494] detecting cgroup driver to use...
	I0513 22:56:54.647732   11992 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0513 22:56:54.676868   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 22:56:54.708618   11992 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0513 22:56:54.743790   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 22:56:54.772289   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 22:56:54.804394   11992 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0513 22:56:54.863192   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 22:56:54.883694   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 22:56:54.925677   11992 ssh_runner.go:195] Run: which cri-dockerd
	I0513 22:56:54.941330   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0513 22:56:54.961803   11992 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0513 22:56:55.000622   11992 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0513 22:56:55.185909   11992 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0513 22:56:55.353725   11992 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0513 22:56:55.354052   11992 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0513 22:56:55.397672   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:56:55.560960   11992 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0513 22:56:58.037701   11992 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4766421s)
	I0513 22:56:58.048830   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0513 22:56:58.082368   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 22:56:58.114531   11992 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0513 22:56:58.288297   11992 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0513 22:56:58.450940   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:56:58.636351   11992 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0513 22:56:58.675165   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 22:56:58.705482   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:56:58.877626   11992 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0513 22:56:58.966407   11992 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0513 22:56:58.975566   11992 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0513 22:56:58.989297   11992 start.go:562] Will wait 60s for crictl version
	I0513 22:56:58.999445   11992 ssh_runner.go:195] Run: which crictl
	I0513 22:56:59.018483   11992 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0513 22:56:59.064682   11992 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0513 22:56:59.074508   11992 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 22:56:59.109388   11992 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 22:56:59.140814   11992 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0513 22:56:59.140923   11992 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0513 22:56:59.144541   11992 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0513 22:56:59.144541   11992 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0513 22:56:59.145085   11992 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0513 22:56:59.145085   11992 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:95:ed Flags:up|broadcast|multicast|running}
	I0513 22:56:59.148127   11992 ip.go:210] interface addr: fe80::3ceb:68d:afab:af25/64
	I0513 22:56:59.148164   11992 ip.go:210] interface addr: 172.23.96.1/20
	I0513 22:56:59.157742   11992 ssh_runner.go:195] Run: grep 172.23.96.1	host.minikube.internal$ /etc/hosts
	I0513 22:56:59.163114   11992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
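The `/bin/bash -c "{ grep -v …; echo …; } > /tmp/h.$$; sudo cp …"` one-liner above is minikube's idempotent hosts-file update: strip any stale `host.minikube.internal` entry, append the fresh one, then copy the temp file back over `/etc/hosts`. A sketch of the same idiom against a local copy (the seed entries and IP are illustrative):

```shell
# Sketch: idempotently pin a hostname to an IP in a hosts file,
# operating on a local copy instead of /etc/hosts (no sudo).
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.5\thost.minikube.internal\n' > "$hosts"
ip=172.23.96.1   # illustrative host gateway address
# Remove any stale entry, then append the current mapping
{ grep -v 'host\.minikube\.internal' "$hosts"; printf '%s\thost.minikube.internal\n' "$ip"; } > "$hosts.new"
cp "$hosts.new" "$hosts"
cat "$hosts"
```

Because the old entry is filtered out before the new one is appended, re-running the snippet always leaves exactly one `host.minikube.internal` line.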
	I0513 22:56:59.193916   11992 kubeadm.go:877] updating cluster {Name:ha-586300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-586300 Namespace:default APIServerHAVIP
:172.23.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.102.229 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0513 22:56:59.194788   11992 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 22:56:59.199991   11992 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0513 22:56:59.217670   11992 docker.go:685] Got preloaded images: 
	I0513 22:56:59.217670   11992 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0513 22:56:59.225275   11992 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0513 22:56:59.250914   11992 ssh_runner.go:195] Run: which lz4
	I0513 22:56:59.256698   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0513 22:56:59.264911   11992 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0513 22:56:59.271343   11992 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0513 22:56:59.271482   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0513 22:57:00.692144   11992 docker.go:649] duration metric: took 1.4348105s to copy over tarball
	I0513 22:57:00.699966   11992 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0513 22:57:09.890973   11992 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (9.1905603s)
	I0513 22:57:09.891134   11992 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0513 22:57:09.949474   11992 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0513 22:57:09.968940   11992 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0513 22:57:10.010043   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:57:10.195908   11992 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0513 22:57:13.502028   11992 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3059892s)
	I0513 22:57:13.507396   11992 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0513 22:57:13.528626   11992 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0513 22:57:13.528626   11992 cache_images.go:84] Images are preloaded, skipping loading
	I0513 22:57:13.528626   11992 kubeadm.go:928] updating node { 172.23.102.229 8443 v1.30.0 docker true true} ...
	I0513 22:57:13.528626   11992 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-586300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.102.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-586300 Namespace:default APIServerHAVIP:172.23.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0513 22:57:13.538878   11992 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0513 22:57:13.568100   11992 cni.go:84] Creating CNI manager for ""
	I0513 22:57:13.568817   11992 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0513 22:57:13.568863   11992 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0513 22:57:13.568863   11992 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.102.229 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-586300 NodeName:ha-586300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.102.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.102.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0513 22:57:13.568863   11992 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.102.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-586300"
	  kubeletExtraArgs:
	    node-ip: 172.23.102.229
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.102.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
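The generated kubeadm config above is a four-document YAML stream (`InitConfiguration`, `ClusterConfiguration`, `KubeletConfiguration`, `KubeProxyConfiguration`) separated by `---`, with the kubelet's `cgroupDriver` matching the `cgroupfs` driver configured for the runtime earlier in the log. A sketch that sanity-checks that shape on a miniature version of the file (the stub documents are illustrative):

```shell
# Sketch: verify the kubeadm config keeps one YAML document per kind
# and that the kubelet cgroup driver matches the runtime's (cgroupfs).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
docs=$(grep -c '^---$' "$cfg")   # 3 separators => 4 documents
echo "separators: $docs"
grep '^cgroupDriver:' "$cfg"
```

A mismatch between the kubelet's `cgroupDriver` and the container runtime's cgroup driver is a classic cause of kubelet startup failures, which is why both are pinned to `cgroupfs` here.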
	
	I0513 22:57:13.568863   11992 kube-vip.go:115] generating kube-vip config ...
	I0513 22:57:13.577147   11992 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0513 22:57:13.601883   11992 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0513 22:57:13.602736   11992 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.23.111.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0513 22:57:13.611440   11992 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0513 22:57:13.631631   11992 binaries.go:44] Found k8s binaries, skipping transfer
	I0513 22:57:13.641348   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0513 22:57:13.659492   11992 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0513 22:57:13.685374   11992 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0513 22:57:13.717509   11992 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0513 22:57:13.746933   11992 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0513 22:57:13.787353   11992 ssh_runner.go:195] Run: grep 172.23.111.254	control-plane.minikube.internal$ /etc/hosts
	I0513 22:57:13.793176   11992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.111.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0513 22:57:13.826003   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 22:57:14.001152   11992 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 22:57:14.026204   11992 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300 for IP: 172.23.102.229
	I0513 22:57:14.026204   11992 certs.go:194] generating shared ca certs ...
	I0513 22:57:14.026204   11992 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:57:14.027073   11992 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0513 22:57:14.027237   11992 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0513 22:57:14.027237   11992 certs.go:256] generating profile certs ...
	I0513 22:57:14.028011   11992 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\client.key
	I0513 22:57:14.028093   11992 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\client.crt with IP's: []
	I0513 22:57:14.336335   11992 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\client.crt ...
	I0513 22:57:14.336335   11992 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\client.crt: {Name:mk9dc4b347341b7a60c4c1778c5c41fc236f656a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:57:14.337644   11992 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\client.key ...
	I0513 22:57:14.338198   11992 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\client.key: {Name:mk1658713091c08ebf368e2a1623cd79fe676f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:57:14.339054   11992 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.b1c9a291
	I0513 22:57:14.339054   11992 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.b1c9a291 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.102.229 172.23.111.254]
	I0513 22:57:14.600696   11992 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.b1c9a291 ...
	I0513 22:57:14.600696   11992 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.b1c9a291: {Name:mk88087cc6424098a5e4267c0610ce040ed6c02d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:57:14.602385   11992 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.b1c9a291 ...
	I0513 22:57:14.602385   11992 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.b1c9a291: {Name:mk1ec9ce49999003f9d1727e5d9543b53d6d4347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:57:14.602850   11992 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.b1c9a291 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt
	I0513 22:57:14.614560   11992 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.b1c9a291 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key
	I0513 22:57:14.615549   11992 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key
	I0513 22:57:14.615549   11992 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.crt with IP's: []
	I0513 22:57:14.756371   11992 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.crt ...
	I0513 22:57:14.756371   11992 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.crt: {Name:mk5e1baa9e5c947c5c2eea90c3d72bdb4ccffcb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:57:14.756747   11992 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key ...
	I0513 22:57:14.756747   11992 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key: {Name:mk0f18f00f0a5dbad7013c9d316f7da4b9af2090 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:57:14.757777   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0513 22:57:14.758803   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0513 22:57:14.758963   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0513 22:57:14.759083   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0513 22:57:14.759202   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0513 22:57:14.759254   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0513 22:57:14.759444   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0513 22:57:14.769688   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0513 22:57:14.771052   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem (1338 bytes)
	W0513 22:57:14.771211   11992 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984_empty.pem, impossibly tiny 0 bytes
	I0513 22:57:14.771211   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0513 22:57:14.771460   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0513 22:57:14.771648   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0513 22:57:14.771805   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0513 22:57:14.771964   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem (1708 bytes)
	I0513 22:57:14.771964   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0513 22:57:14.772440   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem -> /usr/share/ca-certificates/5984.pem
	I0513 22:57:14.772440   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /usr/share/ca-certificates/59842.pem
	I0513 22:57:14.772786   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0513 22:57:14.819679   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0513 22:57:14.855282   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0513 22:57:14.893580   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0513 22:57:14.934992   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0513 22:57:14.978256   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0513 22:57:15.020134   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0513 22:57:15.060668   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0513 22:57:15.110083   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0513 22:57:15.151396   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem --> /usr/share/ca-certificates/5984.pem (1338 bytes)
	I0513 22:57:15.202328   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /usr/share/ca-certificates/59842.pem (1708 bytes)
	I0513 22:57:15.248114   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0513 22:57:15.286786   11992 ssh_runner.go:195] Run: openssl version
	I0513 22:57:15.306764   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0513 22:57:15.337318   11992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0513 22:57:15.348218   11992 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0513 22:57:15.359077   11992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0513 22:57:15.377093   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0513 22:57:15.405422   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5984.pem && ln -fs /usr/share/ca-certificates/5984.pem /etc/ssl/certs/5984.pem"
	I0513 22:57:15.428760   11992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5984.pem
	I0513 22:57:15.436046   11992 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0513 22:57:15.446148   11992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5984.pem
	I0513 22:57:15.462308   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5984.pem /etc/ssl/certs/51391683.0"
	I0513 22:57:15.489596   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59842.pem && ln -fs /usr/share/ca-certificates/59842.pem /etc/ssl/certs/59842.pem"
	I0513 22:57:15.518239   11992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59842.pem
	I0513 22:57:15.525660   11992 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0513 22:57:15.536691   11992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59842.pem
	I0513 22:57:15.553797   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59842.pem /etc/ssl/certs/3ec20f2e.0"
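The `openssl x509 -hash` / `test -L || ln -fs` pairs above build the hash-named symlinks (`b5213941.0`, `51391683.0`, `3ec20f2e.0`) that OpenSSL's CA lookup expects under `/etc/ssl/certs`. A sketch of the symlink idiom alone, in a temp directory with a stand-in hash value (the hash and filenames are illustrative; in the log the hash comes from `openssl x509 -hash -noout`):

```shell
# Sketch: the guarded-symlink idiom used for /etc/ssl/certs hash links.
certs=$(mktemp -d)
touch "$certs/minikubeCA.pem"
hash=b5213941   # illustrative subject-hash, normally from `openssl x509 -hash -noout`
test -L "$certs/$hash.0" || ln -fs "$certs/minikubeCA.pem" "$certs/$hash.0"
# Re-running is a no-op thanks to the test -L guard
test -L "$certs/$hash.0" || ln -fs "$certs/minikubeCA.pem" "$certs/$hash.0"
ls -l "$certs"
```

The `.0` suffix is the collision counter OpenSSL uses when several CAs share a subject hash; with a single cert per hash it is always `0`.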
	I0513 22:57:15.577229   11992 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0513 22:57:15.585803   11992 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0513 22:57:15.585803   11992 kubeadm.go:391] StartCluster: {Name:ha-586300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-586300 Namespace:default APIServerHAVIP:172.23.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.102.229 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 22:57:15.592377   11992 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0513 22:57:15.619384   11992 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0513 22:57:15.644222   11992 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0513 22:57:15.671198   11992 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0513 22:57:15.685583   11992 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0513 22:57:15.685583   11992 kubeadm.go:156] found existing configuration files:
	
	I0513 22:57:15.694426   11992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0513 22:57:15.709580   11992 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0513 22:57:15.721572   11992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0513 22:57:15.744066   11992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0513 22:57:15.760277   11992 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0513 22:57:15.772496   11992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0513 22:57:15.797916   11992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0513 22:57:15.812649   11992 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0513 22:57:15.823721   11992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0513 22:57:15.849528   11992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0513 22:57:15.864212   11992 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0513 22:57:15.875186   11992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0513 22:57:15.890922   11992 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0513 22:57:16.245420   11992 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0513 22:57:28.912636   11992 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0513 22:57:28.912912   11992 kubeadm.go:309] [preflight] Running pre-flight checks
	I0513 22:57:28.913304   11992 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0513 22:57:28.913522   11992 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0513 22:57:28.914059   11992 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0513 22:57:28.914231   11992 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0513 22:57:28.917716   11992 out.go:204]   - Generating certificates and keys ...
	I0513 22:57:28.917901   11992 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0513 22:57:28.917901   11992 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0513 22:57:28.917901   11992 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0513 22:57:28.918460   11992 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0513 22:57:28.918714   11992 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0513 22:57:28.918827   11992 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0513 22:57:28.918827   11992 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0513 22:57:28.919172   11992 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-586300 localhost] and IPs [172.23.102.229 127.0.0.1 ::1]
	I0513 22:57:28.919172   11992 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0513 22:57:28.919508   11992 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-586300 localhost] and IPs [172.23.102.229 127.0.0.1 ::1]
	I0513 22:57:28.919508   11992 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0513 22:57:28.919508   11992 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0513 22:57:28.919508   11992 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0513 22:57:28.920057   11992 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0513 22:57:28.920057   11992 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0513 22:57:28.920269   11992 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0513 22:57:28.920269   11992 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0513 22:57:28.920269   11992 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0513 22:57:28.920269   11992 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0513 22:57:28.920818   11992 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0513 22:57:28.920977   11992 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0513 22:57:28.923966   11992 out.go:204]   - Booting up control plane ...
	I0513 22:57:28.924629   11992 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0513 22:57:28.924629   11992 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0513 22:57:28.924629   11992 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0513 22:57:28.925231   11992 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0513 22:57:28.925474   11992 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0513 22:57:28.925474   11992 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0513 22:57:28.925712   11992 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0513 22:57:28.925712   11992 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0513 22:57:28.925712   11992 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002877081s
	I0513 22:57:28.926245   11992 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0513 22:57:28.926297   11992 kubeadm.go:309] [api-check] The API server is healthy after 7.003004483s
	I0513 22:57:28.926297   11992 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0513 22:57:28.926297   11992 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0513 22:57:28.926917   11992 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0513 22:57:28.927156   11992 kubeadm.go:309] [mark-control-plane] Marking the node ha-586300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0513 22:57:28.927432   11992 kubeadm.go:309] [bootstrap-token] Using token: ynj82i.n6eonv2vordb1vfy
	I0513 22:57:28.930010   11992 out.go:204]   - Configuring RBAC rules ...
	I0513 22:57:28.931149   11992 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0513 22:57:28.931149   11992 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0513 22:57:28.931673   11992 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0513 22:57:28.931702   11992 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0513 22:57:28.931702   11992 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0513 22:57:28.932258   11992 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0513 22:57:28.932423   11992 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0513 22:57:28.932624   11992 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0513 22:57:28.932624   11992 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0513 22:57:28.932624   11992 kubeadm.go:309] 
	I0513 22:57:28.932624   11992 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0513 22:57:28.932624   11992 kubeadm.go:309] 
	I0513 22:57:28.932624   11992 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0513 22:57:28.932624   11992 kubeadm.go:309] 
	I0513 22:57:28.932624   11992 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0513 22:57:28.933257   11992 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0513 22:57:28.933357   11992 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0513 22:57:28.933357   11992 kubeadm.go:309] 
	I0513 22:57:28.933357   11992 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0513 22:57:28.933357   11992 kubeadm.go:309] 
	I0513 22:57:28.933357   11992 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0513 22:57:28.933357   11992 kubeadm.go:309] 
	I0513 22:57:28.933357   11992 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0513 22:57:28.933906   11992 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0513 22:57:28.933959   11992 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0513 22:57:28.933959   11992 kubeadm.go:309] 
	I0513 22:57:28.933959   11992 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0513 22:57:28.933959   11992 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0513 22:57:28.933959   11992 kubeadm.go:309] 
	I0513 22:57:28.934623   11992 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ynj82i.n6eonv2vordb1vfy \
	I0513 22:57:28.934836   11992 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 \
	I0513 22:57:28.934878   11992 kubeadm.go:309] 	--control-plane 
	I0513 22:57:28.934960   11992 kubeadm.go:309] 
	I0513 22:57:28.934960   11992 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0513 22:57:28.935172   11992 kubeadm.go:309] 
	I0513 22:57:28.935172   11992 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ynj82i.n6eonv2vordb1vfy \
	I0513 22:57:28.935919   11992 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 
	I0513 22:57:28.936717   11992 cni.go:84] Creating CNI manager for ""
	I0513 22:57:28.936717   11992 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0513 22:57:28.942702   11992 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0513 22:57:28.953617   11992 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0513 22:57:28.961223   11992 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0513 22:57:28.961272   11992 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0513 22:57:29.001252   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0513 22:57:29.488035   11992 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0513 22:57:29.501285   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-586300 minikube.k8s.io/updated_at=2024_05_13T22_57_29_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761 minikube.k8s.io/name=ha-586300 minikube.k8s.io/primary=true
	I0513 22:57:29.503506   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:29.541845   11992 ops.go:34] apiserver oom_adj: -16
	I0513 22:57:29.721723   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:30.231002   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:30.740602   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:31.238191   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:31.743211   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:32.225571   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:32.729560   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:33.227963   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:33.726231   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:34.231133   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:34.729556   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:35.228582   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:35.734606   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:36.233364   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:36.738564   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:37.227234   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:37.732458   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:38.229612   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:38.728531   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:39.230002   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:39.731046   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:40.231116   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:40.726192   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:41.236256   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:41.740854   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:42.228034   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 22:57:42.381255   11992 kubeadm.go:1107] duration metric: took 12.8927049s to wait for elevateKubeSystemPrivileges
	W0513 22:57:42.381740   11992 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0513 22:57:42.381740   11992 kubeadm.go:393] duration metric: took 26.7948701s to StartCluster
	I0513 22:57:42.381830   11992 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:57:42.381943   11992 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 22:57:42.384248   11992 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:57:42.386083   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0513 22:57:42.386330   11992 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.23.102.229 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 22:57:42.386641   11992 start.go:240] waiting for startup goroutines ...
	I0513 22:57:42.386330   11992 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0513 22:57:42.386691   11992 addons.go:69] Setting default-storageclass=true in profile "ha-586300"
	I0513 22:57:42.386691   11992 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 22:57:42.386691   11992 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-586300"
	I0513 22:57:42.386691   11992 addons.go:69] Setting storage-provisioner=true in profile "ha-586300"
	I0513 22:57:42.386691   11992 addons.go:234] Setting addon storage-provisioner=true in "ha-586300"
	I0513 22:57:42.387293   11992 host.go:66] Checking if "ha-586300" exists ...
	I0513 22:57:42.387623   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:57:42.388370   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:57:42.561504   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.23.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0513 22:57:42.913111   11992 start.go:946] {"host.minikube.internal": 172.23.96.1} host record injected into CoreDNS's ConfigMap
	I0513 22:57:44.473993   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:57:44.473993   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:57:44.478219   11992 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 22:57:44.481630   11992 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0513 22:57:44.481630   11992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0513 22:57:44.481801   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:57:44.493890   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:57:44.493966   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:57:44.494606   11992 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 22:57:44.494606   11992 kapi.go:59] client config for ha-586300: &rest.Config{Host:"https://172.23.111.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-586300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-586300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0513 22:57:44.496316   11992 cert_rotation.go:137] Starting client certificate rotation controller
	I0513 22:57:44.496580   11992 addons.go:234] Setting addon default-storageclass=true in "ha-586300"
	I0513 22:57:44.496580   11992 host.go:66] Checking if "ha-586300" exists ...
	I0513 22:57:44.497439   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:57:46.544655   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:57:46.544655   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:57:46.544879   11992 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0513 22:57:46.544879   11992 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0513 22:57:46.544879   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 22:57:46.545914   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:57:46.545914   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:57:46.545974   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:57:48.598859   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:57:48.598859   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:57:48.599115   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 22:57:48.976593   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:57:48.976593   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:57:48.976593   11992 sshutil.go:53] new ssh client: &{IP:172.23.102.229 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\id_rsa Username:docker}
	I0513 22:57:49.114228   11992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0513 22:57:50.961504   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 22:57:50.961504   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:57:50.962143   11992 sshutil.go:53] new ssh client: &{IP:172.23.102.229 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\id_rsa Username:docker}
	I0513 22:57:51.095010   11992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0513 22:57:51.250573   11992 round_trippers.go:463] GET https://172.23.111.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0513 22:57:51.250573   11992 round_trippers.go:469] Request Headers:
	I0513 22:57:51.250573   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:57:51.250573   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:57:51.264154   11992 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0513 22:57:51.264764   11992 round_trippers.go:463] PUT https://172.23.111.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0513 22:57:51.264764   11992 round_trippers.go:469] Request Headers:
	I0513 22:57:51.264764   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 22:57:51.264764   11992 round_trippers.go:473]     Content-Type: application/json
	I0513 22:57:51.264764   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 22:57:51.267943   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 22:57:51.272284   11992 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0513 22:57:51.275496   11992 addons.go:505] duration metric: took 8.8889015s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0513 22:57:51.275496   11992 start.go:245] waiting for cluster config update ...
	I0513 22:57:51.275496   11992 start.go:254] writing updated cluster config ...
	I0513 22:57:51.278672   11992 out.go:177] 
	I0513 22:57:51.290807   11992 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 22:57:51.290807   11992 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\config.json ...
	I0513 22:57:51.295960   11992 out.go:177] * Starting "ha-586300-m02" control-plane node in "ha-586300" cluster
	I0513 22:57:51.298225   11992 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 22:57:51.298632   11992 cache.go:56] Caching tarball of preloaded images
	I0513 22:57:51.298632   11992 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0513 22:57:51.298632   11992 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 22:57:51.299361   11992 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\config.json ...
	I0513 22:57:51.304796   11992 start.go:360] acquireMachinesLock for ha-586300-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 22:57:51.304796   11992 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-586300-m02"
	I0513 22:57:51.304796   11992 start.go:93] Provisioning new machine with config: &{Name:ha-586300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-586300 Namespace:default APIServerHAVIP:172.23.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.102.229 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 22:57:51.304796   11992 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0513 22:57:51.310112   11992 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 22:57:51.310112   11992 start.go:159] libmachine.API.Create for "ha-586300" (driver="hyperv")
	I0513 22:57:51.310112   11992 client.go:168] LocalClient.Create starting
	I0513 22:57:51.310650   11992 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0513 22:57:51.310773   11992 main.go:141] libmachine: Decoding PEM data...
	I0513 22:57:51.310773   11992 main.go:141] libmachine: Parsing certificate...
	I0513 22:57:51.310773   11992 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0513 22:57:51.310773   11992 main.go:141] libmachine: Decoding PEM data...
	I0513 22:57:51.310773   11992 main.go:141] libmachine: Parsing certificate...
	I0513 22:57:51.310773   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0513 22:57:52.957696   11992 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0513 22:57:52.957696   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:57:52.958200   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0513 22:57:54.540990   11992 main.go:141] libmachine: [stdout =====>] : False
	
	I0513 22:57:54.540990   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:57:54.540990   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0513 22:57:55.937657   11992 main.go:141] libmachine: [stdout =====>] : True
	
	I0513 22:57:55.937657   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:57:55.937657   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0513 22:57:59.127033   11992 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0513 22:57:59.127713   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:57:59.129585   11992 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-amd64.iso...
	I0513 22:57:59.481868   11992 main.go:141] libmachine: Creating SSH key...
	I0513 22:57:59.666272   11992 main.go:141] libmachine: Creating VM...
	I0513 22:57:59.666272   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0513 22:58:02.254045   11992 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0513 22:58:02.254045   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:02.254045   11992 main.go:141] libmachine: Using switch "Default Switch"
	I0513 22:58:02.254045   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0513 22:58:03.856076   11992 main.go:141] libmachine: [stdout =====>] : True
	
	I0513 22:58:03.856595   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:03.856595   11992 main.go:141] libmachine: Creating VHD
	I0513 22:58:03.856668   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0513 22:58:07.328003   11992 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 7ED062D4-E020-43AF-A7EC-0E9D8E0256F5
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0513 22:58:07.328003   11992 main.go:141] libmachine: [stderr =====>] : 
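The `New-VHD` output above reports a `FileSize` (10486272) that exceeds `Size` (10485760) by exactly 512 bytes, which matches the fixed-VHD format's 512-byte footer appended after the disk data. A quick sanity check of that arithmetic, using the values from the log:

```shell
# Fixed VHDs append a 512-byte footer after the virtual disk data,
# so the on-disk file is Size + 512 bytes. Values below are copied
# from the New-VHD output in the log above.
size=10485760        # Size reported by New-VHD (10 MB)
file_size=10486272   # FileSize reported by New-VHD
footer=512           # fixed-VHD footer length in bytes

echo $(( file_size - size ))   # difference between file size and disk size
test $(( size + footer )) -eq "$file_size" && echo "footer accounts for the difference"
```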
	I0513 22:58:07.328078   11992 main.go:141] libmachine: Writing magic tar header
	I0513 22:58:07.328078   11992 main.go:141] libmachine: Writing SSH key tar header
	I0513 22:58:07.336465   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0513 22:58:10.263301   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:58:10.264335   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:10.264380   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\disk.vhd' -SizeBytes 20000MB
	I0513 22:58:12.596156   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:58:12.596221   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:12.596221   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-586300-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0513 22:58:15.803398   11992 main.go:141] libmachine: [stdout =====>] : 
Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-586300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0513 22:58:15.803662   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:15.803662   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-586300-m02 -DynamicMemoryEnabled $false
	I0513 22:58:17.789394   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:58:17.790107   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:17.790107   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-586300-m02 -Count 2
	I0513 22:58:19.723214   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:58:19.723214   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:19.723598   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-586300-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\boot2docker.iso'
	I0513 22:58:22.033949   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:58:22.033949   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:22.033949   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-586300-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\disk.vhd'
	I0513 22:58:24.391458   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:58:24.391458   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:24.391458   11992 main.go:141] libmachine: Starting VM...
	I0513 22:58:24.391527   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-586300-m02
	I0513 22:58:27.153185   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:58:27.153185   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:27.153185   11992 main.go:141] libmachine: Waiting for host to start...
	I0513 22:58:27.153185   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:58:29.161623   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:58:29.162065   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:29.162086   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:58:31.357356   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:58:31.357356   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:32.366040   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:58:34.319749   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:58:34.319749   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:34.319749   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:58:36.568076   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:58:36.568348   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:37.568577   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:58:39.547280   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:58:39.547340   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:39.547545   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:58:41.835281   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:58:41.835528   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:42.839818   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:58:44.808909   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:58:44.808909   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:44.809007   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:58:47.031235   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 22:58:47.031235   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:48.046089   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:58:49.971223   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:58:49.971223   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:49.971901   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:58:52.319200   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:58:52.319200   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:52.319669   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:58:54.219331   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:58:54.219784   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:54.219784   11992 machine.go:94] provisionDockerMachine start ...
	I0513 22:58:54.219868   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:58:56.133003   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:58:56.133286   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:56.133399   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:58:58.370942   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:58:58.370994   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:58:58.373966   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:58:58.385180   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.68 22 <nil> <nil>}
	I0513 22:58:58.385180   11992 main.go:141] libmachine: About to run SSH command:
	hostname
	I0513 22:58:58.512374   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0513 22:58:58.512374   11992 buildroot.go:166] provisioning hostname "ha-586300-m02"
	I0513 22:58:58.512374   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:00.380480   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:00.380480   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:00.380480   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:02.622182   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:02.623005   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:02.628418   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:59:02.628418   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.68 22 <nil> <nil>}
	I0513 22:59:02.628418   11992 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-586300-m02 && echo "ha-586300-m02" | sudo tee /etc/hostname
	I0513 22:59:02.791744   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-586300-m02
	
	I0513 22:59:02.791744   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:04.714403   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:04.714403   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:04.714604   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:06.977055   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:06.977435   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:06.981289   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:59:06.981462   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.68 22 <nil> <nil>}
	I0513 22:59:06.981462   11992 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-586300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-586300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-586300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0513 22:59:07.110619   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: 
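The SSH command above either rewrites an existing `127.0.1.1` entry or appends a new one so the node resolves its own hostname. A standalone sketch of the same logic, run against a scratch copy instead of the real `/etc/hosts` (so no sudo is needed; the file path and seed contents are placeholders):

```shell
# Demonstrate the hostname-injection logic from the log against a temp
# file rather than /etc/hosts. The hostname is the node name from the
# log; the seed contents are illustrative.
hosts=$(mktemp)
name="ha-586300-m02"
printf '127.0.0.1 localhost\n127.0.1.1 minikube\n' > "$hosts"

if ! grep -q "\s$name$" "$hosts"; then
    if grep -q '^127\.0\.1\.1\s' "$hosts"; then
        # an entry already exists: rewrite it to point at the new hostname
        sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $name/" "$hosts"
    else
        # no 127.0.1.1 entry yet: append one
        echo "127.0.1.1 $name" >> "$hosts"
    fi
fi
grep '^127\.0\.1\.1' "$hosts"   # 127.0.1.1 ha-586300-m02
```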
	I0513 22:59:07.110619   11992 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0513 22:59:07.110619   11992 buildroot.go:174] setting up certificates
	I0513 22:59:07.110619   11992 provision.go:84] configureAuth start
	I0513 22:59:07.110619   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:09.021585   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:09.021585   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:09.022521   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:11.276521   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:11.276877   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:11.276877   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:13.173312   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:13.174339   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:13.174415   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:15.439712   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:15.440140   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:15.440140   11992 provision.go:143] copyHostCerts
	I0513 22:59:15.440272   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0513 22:59:15.440272   11992 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0513 22:59:15.440272   11992 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0513 22:59:15.440272   11992 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0513 22:59:15.441749   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0513 22:59:15.441899   11992 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0513 22:59:15.441972   11992 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0513 22:59:15.442207   11992 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0513 22:59:15.442913   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0513 22:59:15.443083   11992 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0513 22:59:15.443167   11992 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0513 22:59:15.443403   11992 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0513 22:59:15.444175   11992 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-586300-m02 san=[127.0.0.1 172.23.108.68 ha-586300-m02 localhost minikube]
	I0513 22:59:15.589413   11992 provision.go:177] copyRemoteCerts
	I0513 22:59:15.598046   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0513 22:59:15.598046   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:17.541658   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:17.541658   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:17.541932   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:19.810964   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:19.810964   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:19.811333   11992 sshutil.go:53] new ssh client: &{IP:172.23.108.68 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\id_rsa Username:docker}
	I0513 22:59:19.905707   11992 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3074896s)
	I0513 22:59:19.905778   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0513 22:59:19.905778   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0513 22:59:19.956937   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0513 22:59:19.956937   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0513 22:59:19.998480   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0513 22:59:19.998942   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0513 22:59:20.048199   11992 provision.go:87] duration metric: took 12.9370024s to configureAuth
	I0513 22:59:20.048254   11992 buildroot.go:189] setting minikube options for container-runtime
	I0513 22:59:20.049043   11992 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 22:59:20.049186   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:21.952491   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:21.953462   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:21.953462   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:24.184676   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:24.184676   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:24.189694   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:59:24.189694   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.68 22 <nil> <nil>}
	I0513 22:59:24.189694   11992 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0513 22:59:24.313297   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0513 22:59:24.313297   11992 buildroot.go:70] root file system type: tmpfs
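The provisioner detects the guest's root filesystem with the one-liner shown above. The same command can be run on any Linux box with GNU coreutils; on the boot2docker guest it prints `tmpfs`, while on an ordinary host it prints that machine's root filesystem type instead:

```shell
# Same detection command the provisioner runs over SSH (requires GNU df,
# which supports --output). Prints the filesystem type of /.
fstype=$(df --output=fstype / | tail -n 1)
echo "$fstype"
```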
	I0513 22:59:24.313297   11992 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0513 22:59:24.313832   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:26.225193   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:26.225695   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:26.225746   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:28.488208   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:28.488208   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:28.491697   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:59:28.492302   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.68 22 <nil> <nil>}
	I0513 22:59:28.492302   11992 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.23.102.229"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0513 22:59:28.640364   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.23.102.229
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0513 22:59:28.640364   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:30.570651   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:30.570802   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:30.570802   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:32.822577   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:32.822577   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:32.826772   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:59:32.826772   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.68 22 <nil> <nil>}
	I0513 22:59:32.826772   11992 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0513 22:59:34.888133   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0513 22:59:34.888133   11992 machine.go:97] duration metric: took 40.6667265s to provisionDockerMachine
	I0513 22:59:34.888133   11992 client.go:171] duration metric: took 1m43.5738887s to LocalClient.Create
	I0513 22:59:34.888133   11992 start.go:167] duration metric: took 1m43.5738887s to libmachine.API.Create "ha-586300"
	I0513 22:59:34.888133   11992 start.go:293] postStartSetup for "ha-586300-m02" (driver="hyperv")
	I0513 22:59:34.888133   11992 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0513 22:59:34.896119   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0513 22:59:34.896119   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:36.764835   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:36.764835   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:36.764912   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:39.020300   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:39.020361   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:39.020418   11992 sshutil.go:53] new ssh client: &{IP:172.23.108.68 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\id_rsa Username:docker}
	I0513 22:59:39.125680   11992 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2293532s)
	I0513 22:59:39.133713   11992 ssh_runner.go:195] Run: cat /etc/os-release
	I0513 22:59:39.140929   11992 info.go:137] Remote host: Buildroot 2023.02.9
	I0513 22:59:39.140929   11992 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0513 22:59:39.140929   11992 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0513 22:59:39.141974   11992 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> 59842.pem in /etc/ssl/certs
	I0513 22:59:39.141974   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /etc/ssl/certs/59842.pem
	I0513 22:59:39.150052   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0513 22:59:39.165785   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /etc/ssl/certs/59842.pem (1708 bytes)
	I0513 22:59:39.210255   11992 start.go:296] duration metric: took 4.3219487s for postStartSetup
	I0513 22:59:39.212337   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:41.100845   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:41.101688   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:41.101782   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:43.400172   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:43.400172   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:43.400644   11992 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\config.json ...
	I0513 22:59:43.401847   11992 start.go:128] duration metric: took 1m52.0925773s to createHost
	I0513 22:59:43.402376   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:45.270655   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:45.271313   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:45.271313   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:47.548125   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:47.548125   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:47.552502   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:59:47.552756   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.68 22 <nil> <nil>}
	I0513 22:59:47.552756   11992 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0513 22:59:47.679763   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715641187.839715177
	
	I0513 22:59:47.679866   11992 fix.go:216] guest clock: 1715641187.839715177
	I0513 22:59:47.679866   11992 fix.go:229] Guest: 2024-05-13 22:59:47.839715177 +0000 UTC Remote: 2024-05-13 22:59:43.4018473 +0000 UTC m=+303.477709501 (delta=4.437867877s)
	I0513 22:59:47.679866   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:49.530810   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:49.530810   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:49.530810   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:51.763607   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:51.764158   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:51.767068   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 22:59:51.767643   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.108.68 22 <nil> <nil>}
	I0513 22:59:51.767643   11992 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715641187
	I0513 22:59:51.906004   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 13 22:59:47 UTC 2024
	
	I0513 22:59:51.906004   11992 fix.go:236] clock set: Mon May 13 22:59:47 UTC 2024
	 (err=<nil>)
	I0513 22:59:51.906004   11992 start.go:83] releasing machines lock for "ha-586300-m02", held for 2m0.5963942s
	I0513 22:59:51.906622   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:53.780156   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:53.780156   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:53.780156   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:56.017431   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 22:59:56.018024   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:56.022985   11992 out.go:177] * Found network options:
	I0513 22:59:56.025146   11992 out.go:177]   - NO_PROXY=172.23.102.229
	W0513 22:59:56.027581   11992 proxy.go:119] fail to check proxy env: Error ip not in block
	I0513 22:59:56.028961   11992 out.go:177]   - NO_PROXY=172.23.102.229
	W0513 22:59:56.031851   11992 proxy.go:119] fail to check proxy env: Error ip not in block
	W0513 22:59:56.033025   11992 proxy.go:119] fail to check proxy env: Error ip not in block
	I0513 22:59:56.034881   11992 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0513 22:59:56.034881   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:56.041879   11992 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0513 22:59:56.041879   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 22:59:57.987643   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:57.987643   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:57.987643   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 22:59:57.988248   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 22:59:57.988248   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 22:59:57.988248   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:00:00.322394   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 23:00:00.322394   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:00:00.322394   11992 sshutil.go:53] new ssh client: &{IP:172.23.108.68 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\id_rsa Username:docker}
	I0513 23:00:00.345380   11992 main.go:141] libmachine: [stdout =====>] : 172.23.108.68
	
	I0513 23:00:00.345380   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:00:00.345380   11992 sshutil.go:53] new ssh client: &{IP:172.23.108.68 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m02\id_rsa Username:docker}
	I0513 23:00:00.422818   11992 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.3807633s)
	W0513 23:00:00.422818   11992 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0513 23:00:00.433044   11992 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0513 23:00:00.644064   11992 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0513 23:00:00.644064   11992 start.go:494] detecting cgroup driver to use...
	I0513 23:00:00.644064   11992 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6089981s)
	I0513 23:00:00.644064   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 23:00:00.685759   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0513 23:00:00.712915   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0513 23:00:00.731935   11992 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0513 23:00:00.741228   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0513 23:00:00.767732   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 23:00:00.793189   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0513 23:00:00.820647   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 23:00:00.849444   11992 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0513 23:00:00.879897   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0513 23:00:00.906252   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0513 23:00:00.933363   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0513 23:00:00.958899   11992 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0513 23:00:00.984035   11992 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0513 23:00:01.008999   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:00:01.201889   11992 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0513 23:00:01.234211   11992 start.go:494] detecting cgroup driver to use...
	I0513 23:00:01.244060   11992 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0513 23:00:01.276310   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 23:00:01.309849   11992 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0513 23:00:01.357918   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 23:00:01.394200   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 23:00:01.425951   11992 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0513 23:00:01.493157   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 23:00:01.518492   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 23:00:01.564643   11992 ssh_runner.go:195] Run: which cri-dockerd
	I0513 23:00:01.581079   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0513 23:00:01.599043   11992 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0513 23:00:01.638891   11992 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0513 23:00:01.846689   11992 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0513 23:00:02.019200   11992 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0513 23:00:02.019200   11992 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0513 23:00:02.064212   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:00:02.254716   11992 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0513 23:00:04.767547   11992 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5127308s)
	I0513 23:00:04.776622   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0513 23:00:04.808242   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 23:00:04.845442   11992 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0513 23:00:05.029236   11992 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0513 23:00:05.212292   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:00:05.387410   11992 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0513 23:00:05.425971   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 23:00:05.458973   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:00:05.651539   11992 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0513 23:00:05.753117   11992 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0513 23:00:05.761858   11992 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0513 23:00:05.770723   11992 start.go:562] Will wait 60s for crictl version
	I0513 23:00:05.782718   11992 ssh_runner.go:195] Run: which crictl
	I0513 23:00:05.797063   11992 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0513 23:00:05.852053   11992 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0513 23:00:05.861262   11992 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 23:00:05.896587   11992 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 23:00:05.928336   11992 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0513 23:00:05.930917   11992 out.go:177]   - env NO_PROXY=172.23.102.229
	I0513 23:00:05.933280   11992 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0513 23:00:05.936777   11992 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0513 23:00:05.936777   11992 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0513 23:00:05.936777   11992 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0513 23:00:05.936777   11992 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:95:ed Flags:up|broadcast|multicast|running}
	I0513 23:00:05.939718   11992 ip.go:210] interface addr: fe80::3ceb:68d:afab:af25/64
	I0513 23:00:05.939718   11992 ip.go:210] interface addr: 172.23.96.1/20
	I0513 23:00:05.947539   11992 ssh_runner.go:195] Run: grep 172.23.96.1	host.minikube.internal$ /etc/hosts
	I0513 23:00:05.953636   11992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0513 23:00:05.974279   11992 mustload.go:65] Loading cluster: ha-586300
	I0513 23:00:05.974772   11992 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:00:05.974772   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 23:00:07.984072   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:00:07.984072   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:00:07.984072   11992 host.go:66] Checking if "ha-586300" exists ...
	I0513 23:00:07.984784   11992 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300 for IP: 172.23.108.68
	I0513 23:00:07.984784   11992 certs.go:194] generating shared ca certs ...
	I0513 23:00:07.984784   11992 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:00:07.985484   11992 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0513 23:00:07.985484   11992 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0513 23:00:07.985484   11992 certs.go:256] generating profile certs ...
	I0513 23:00:07.986386   11992 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\client.key
	I0513 23:00:07.986561   11992 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.6bf21e4f
	I0513 23:00:07.986588   11992 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.6bf21e4f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.102.229 172.23.108.68 172.23.111.254]
	I0513 23:00:08.079753   11992 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.6bf21e4f ...
	I0513 23:00:08.079753   11992 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.6bf21e4f: {Name:mk3b4d314abff0859b142f769105005e7fbc5a7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:00:08.080760   11992 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.6bf21e4f ...
	I0513 23:00:08.080760   11992 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.6bf21e4f: {Name:mk35b31305d5e6a9cf5203f7fcdff538d0954aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:00:08.081811   11992 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.6bf21e4f -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt
	I0513 23:00:08.091615   11992 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.6bf21e4f -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key
	I0513 23:00:08.093334   11992 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key
	I0513 23:00:08.093334   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0513 23:00:08.094342   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0513 23:00:08.094503   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0513 23:00:08.094569   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0513 23:00:08.094702   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0513 23:00:08.094765   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0513 23:00:08.095345   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0513 23:00:08.095418   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0513 23:00:08.095799   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem (1338 bytes)
	W0513 23:00:08.095995   11992 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984_empty.pem, impossibly tiny 0 bytes
	I0513 23:00:08.096091   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0513 23:00:08.096303   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0513 23:00:08.096498   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0513 23:00:08.096647   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0513 23:00:08.096967   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem (1708 bytes)
	I0513 23:00:08.097143   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:00:08.097268   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem -> /usr/share/ca-certificates/5984.pem
	I0513 23:00:08.097325   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /usr/share/ca-certificates/59842.pem
	I0513 23:00:08.097508   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 23:00:10.108235   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:00:10.108235   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:00:10.108346   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 23:00:12.468157   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 23:00:12.468157   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:00:12.468157   11992 sshutil.go:53] new ssh client: &{IP:172.23.102.229 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\id_rsa Username:docker}
	I0513 23:00:12.570375   11992 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0513 23:00:12.579310   11992 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0513 23:00:12.615150   11992 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0513 23:00:12.626659   11992 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0513 23:00:12.657992   11992 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0513 23:00:12.663601   11992 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0513 23:00:12.691635   11992 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0513 23:00:12.698400   11992 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0513 23:00:12.732891   11992 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0513 23:00:12.742921   11992 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0513 23:00:12.772962   11992 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0513 23:00:12.784195   11992 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0513 23:00:12.806361   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0513 23:00:12.852255   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0513 23:00:12.895318   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0513 23:00:12.937376   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0513 23:00:12.980226   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0513 23:00:13.022601   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0513 23:00:13.065569   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0513 23:00:13.111967   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0513 23:00:13.156293   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0513 23:00:13.196200   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem --> /usr/share/ca-certificates/5984.pem (1338 bytes)
	I0513 23:00:13.240832   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /usr/share/ca-certificates/59842.pem (1708 bytes)
	I0513 23:00:13.288519   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0513 23:00:13.317884   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0513 23:00:13.346970   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0513 23:00:13.375433   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0513 23:00:13.406787   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0513 23:00:13.436213   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0513 23:00:13.466442   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
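	The transfers above follow a stat-then-copy pattern: each destination is stat'ed first, and the file is pushed only when it is absent or its size disagrees. A minimal local sketch of that check, using throwaway temp files rather than the real certificate paths (names here are illustrative):

```shell
# Check-size-then-copy, mirroring the stat-before-scp existence check
# in the log above. Uses wc -c instead of a remote stat for portability.
set -eu
src=$(mktemp)
dst="$src.copy"
printf 'certificate-bytes' > "$src"
# Copy only when the destination is missing or differs in size.
if [ ! -e "$dst" ] || [ "$(wc -c < "$src")" -ne "$(wc -c < "$dst")" ]; then
  cp "$src" "$dst"
fi
wc -c < "$dst"
```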
	I0513 23:00:13.508092   11992 ssh_runner.go:195] Run: openssl version
	I0513 23:00:13.525525   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0513 23:00:13.553663   11992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:00:13.560942   11992 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:00:13.569676   11992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:00:13.586050   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0513 23:00:13.613066   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5984.pem && ln -fs /usr/share/ca-certificates/5984.pem /etc/ssl/certs/5984.pem"
	I0513 23:00:13.642754   11992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5984.pem
	I0513 23:00:13.649805   11992 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0513 23:00:13.660330   11992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5984.pem
	I0513 23:00:13.680305   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5984.pem /etc/ssl/certs/51391683.0"
	I0513 23:00:13.712275   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59842.pem && ln -fs /usr/share/ca-certificates/59842.pem /etc/ssl/certs/59842.pem"
	I0513 23:00:13.741417   11992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59842.pem
	I0513 23:00:13.748831   11992 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0513 23:00:13.756000   11992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59842.pem
	I0513 23:00:13.773591   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59842.pem /etc/ssl/certs/3ec20f2e.0"
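	The `openssl x509 -hash` / `ln -fs` pairs above implement the c_rehash convention: OpenSSL locates CA certificates in /etc/ssl/certs by subject-name hash, so each PEM needs a `<hash>.0` symlink. The same mechanism in isolation, against a throwaway self-signed cert in a temp dir rather than the real minikubeCA.pem:

```shell
# c_rehash-style symlinking: hash the cert's subject name, then link
# <hash>.0 -> cert so OpenSSL's CA lookup can find it.
set -eu
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"
ls -l "$dir/$hash.0"
```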
	I0513 23:00:13.803117   11992 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0513 23:00:13.809118   11992 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0513 23:00:13.809118   11992 kubeadm.go:928] updating node {m02 172.23.108.68 8443 v1.30.0 docker true true} ...
	I0513 23:00:13.810131   11992 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-586300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.108.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-586300 Namespace:default APIServerHAVIP:172.23.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0513 23:00:13.810131   11992 kube-vip.go:115] generating kube-vip config ...
	I0513 23:00:13.818253   11992 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0513 23:00:13.842275   11992 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0513 23:00:13.842275   11992 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.23.111.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0513 23:00:13.855571   11992 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0513 23:00:13.871147   11992 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0513 23:00:13.879558   11992 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0513 23:00:13.899813   11992 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl
	I0513 23:00:13.900458   11992 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet
	I0513 23:00:13.900458   11992 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm
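	Each download above pairs the binary URL with a `checksum=file:` URL, so the fetched bytes are verified against a published SHA-256 before being trusted. The verification step in isolation, run on a stand-in file with no network (file names are illustrative):

```shell
# Compute and compare a SHA-256 digest, as the download step above does
# against the .sha256 file published alongside each binary.
set -eu
dir=$(mktemp -d)
printf 'kubelet-binary-bytes' > "$dir/kubelet"
sha256sum "$dir/kubelet" | awk '{print $1}' > "$dir/kubelet.sha256"
expected=$(cat "$dir/kubelet.sha256")
actual=$(sha256sum "$dir/kubelet" | awk '{print $1}')
[ "$expected" = "$actual" ] && echo "checksum OK"
```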
	I0513 23:00:15.081118   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0513 23:00:15.088574   11992 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0513 23:00:15.098760   11992 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0513 23:00:15.099769   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0513 23:00:15.628406   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0513 23:00:15.637970   11992 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0513 23:00:15.646995   11992 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0513 23:00:15.646995   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0513 23:00:16.808169   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 23:00:16.832141   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0513 23:00:16.840678   11992 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0513 23:00:16.846804   11992 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0513 23:00:16.846804   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0513 23:00:17.393748   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0513 23:00:17.410092   11992 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0513 23:00:17.440755   11992 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0513 23:00:17.473050   11992 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0513 23:00:17.510522   11992 ssh_runner.go:195] Run: grep 172.23.111.254	control-plane.minikube.internal$ /etc/hosts
	I0513 23:00:17.517424   11992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.111.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
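	The /etc/hosts rewrite above is idempotent: it strips any existing `control-plane.minikube.internal` entry before appending the current one, so repeated runs never accumulate duplicates. The same rewrite against a temp file instead of the real /etc/hosts:

```shell
# Idempotent hosts-file update: drop the old control-plane entry (if
# any), append the current VIP, replace the file atomically via mv.
set -eu
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.1\tcontrol-plane.minikube.internal\n' > "$hosts"
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '172.23.111.254\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep -c 'control-plane.minikube.internal' "$hosts"
```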
	I0513 23:00:17.550027   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:00:17.744273   11992 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 23:00:17.774186   11992 host.go:66] Checking if "ha-586300" exists ...
	I0513 23:00:17.774909   11992 start.go:316] joinCluster: &{Name:ha-586300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-586300 Namespace:default APIServerHAVIP:172.23.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.102.229 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.108.68 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 23:00:17.774909   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0513 23:00:17.774909   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 23:00:19.725520   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:00:19.725587   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:00:19.725587   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 23:00:22.013064   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 23:00:22.013805   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:00:22.014019   11992 sshutil.go:53] new ssh client: &{IP:172.23.102.229 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\id_rsa Username:docker}
	I0513 23:00:22.226738   11992 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.4516521s)
	I0513 23:00:22.226738   11992 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.23.108.68 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 23:00:22.226738   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n5djd1.506c2oeaejp22c1d --discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-586300-m02 --control-plane --apiserver-advertise-address=172.23.108.68 --apiserver-bind-port=8443"
	I0513 23:01:02.741253   11992 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n5djd1.506c2oeaejp22c1d --discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-586300-m02 --control-plane --apiserver-advertise-address=172.23.108.68 --apiserver-bind-port=8443": (40.5128976s)
	I0513 23:01:02.741332   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0513 23:01:03.463797   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-586300-m02 minikube.k8s.io/updated_at=2024_05_13T23_01_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761 minikube.k8s.io/name=ha-586300 minikube.k8s.io/primary=false
	I0513 23:01:03.684131   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-586300-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0513 23:01:03.847148   11992 start.go:318] duration metric: took 46.0704007s to joinCluster
	I0513 23:01:03.847388   11992 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.23.108.68 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 23:01:03.850772   11992 out.go:177] * Verifying Kubernetes components...
	I0513 23:01:03.848379   11992 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:01:03.862778   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:01:04.272017   11992 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 23:01:04.306002   11992 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 23:01:04.307037   11992 kapi.go:59] client config for ha-586300: &rest.Config{Host:"https://172.23.111.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-586300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-586300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0513 23:01:04.307037   11992 kubeadm.go:477] Overriding stale ClientConfig host https://172.23.111.254:8443 with https://172.23.102.229:8443
	I0513 23:01:04.307992   11992 node_ready.go:35] waiting up to 6m0s for node "ha-586300-m02" to be "Ready" ...
	I0513 23:01:04.307992   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:04.307992   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:04.307992   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:04.307992   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:04.322756   11992 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0513 23:01:04.824116   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:04.824193   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:04.824193   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:04.824193   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:04.836422   11992 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0513 23:01:05.318663   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:05.318696   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:05.318696   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:05.318696   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:05.328726   11992 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0513 23:01:05.808544   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:05.808777   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:05.808777   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:05.808777   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:05.814209   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:01:06.313137   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:06.313200   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:06.313200   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:06.313200   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:06.317911   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:06.318588   11992 node_ready.go:53] node "ha-586300-m02" has status "Ready":"False"
	I0513 23:01:06.821279   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:06.821279   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:06.821279   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:06.821362   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:06.826011   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:07.314913   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:07.314913   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:07.314913   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:07.314913   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:07.319528   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:07.823280   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:07.823280   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:07.823280   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:07.823280   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:07.827866   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:08.319047   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:08.319047   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:08.319047   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:08.319047   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:08.631292   11992 round_trippers.go:574] Response Status: 200 OK in 312 milliseconds
	I0513 23:01:08.631921   11992 node_ready.go:53] node "ha-586300-m02" has status "Ready":"False"
	I0513 23:01:08.822614   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:08.822614   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:08.822614   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:08.822614   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:08.827064   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:09.312276   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:09.312374   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:09.312374   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:09.312374   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:09.317035   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:09.819485   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:09.819485   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:09.819485   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:09.819485   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:09.824333   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:10.317579   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:10.317579   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:10.317579   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:10.317579   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:10.322414   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:10.818299   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:10.818299   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:10.818299   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:10.818299   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:10.825888   11992 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:01:10.827436   11992 node_ready.go:49] node "ha-586300-m02" has status "Ready":"True"
	I0513 23:01:10.827551   11992 node_ready.go:38] duration metric: took 6.5192998s for node "ha-586300-m02" to be "Ready" ...
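	The repeated GETs above are a poll-until-Ready loop at roughly 500 ms intervals, bounded by the 6m0s wait. The pattern reduces to a generic retry loop; sketched here against a local status file standing in for the node's Ready condition (no real API server involved):

```shell
# Poll a status source until it reports "True". Here the third check
# flips the file itself, as a stand-in for the node becoming Ready.
set -eu
status=$(mktemp)
echo "False" > "$status"
attempts=0
until [ "$(cat "$status")" = "True" ]; do
  attempts=$((attempts + 1))
  if [ "$attempts" -ge 3 ]; then
    echo "True" > "$status"   # stand-in for the node reporting Ready
  fi
  sleep 0.1
done
echo "ready after $attempts checks"
```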
	I0513 23:01:10.827606   11992 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0513 23:01:10.827724   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods
	I0513 23:01:10.827724   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:10.827724   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:10.827724   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:10.840007   11992 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0513 23:01:10.849223   11992 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4qbhd" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:10.849223   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4qbhd
	I0513 23:01:10.849223   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:10.849223   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:10.849223   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:10.853297   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:10.854364   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:01:10.854364   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:10.854364   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:10.854364   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:10.858290   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:10.859217   11992 pod_ready.go:92] pod "coredns-7db6d8ff4d-4qbhd" in "kube-system" namespace has status "Ready":"True"
	I0513 23:01:10.859217   11992 pod_ready.go:81] duration metric: took 9.9937ms for pod "coredns-7db6d8ff4d-4qbhd" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:10.859217   11992 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wj8z7" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:10.859331   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wj8z7
	I0513 23:01:10.859370   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:10.859370   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:10.859370   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:10.862568   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:10.864055   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:01:10.864109   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:10.864109   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:10.864109   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:10.868063   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:10.869561   11992 pod_ready.go:92] pod "coredns-7db6d8ff4d-wj8z7" in "kube-system" namespace has status "Ready":"True"
	I0513 23:01:10.869647   11992 pod_ready.go:81] duration metric: took 10.4295ms for pod "coredns-7db6d8ff4d-wj8z7" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:10.869647   11992 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:10.869914   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300
	I0513 23:01:10.869914   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:10.869914   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:10.869914   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:10.873231   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:10.874701   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:01:10.874787   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:10.874787   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:10.874787   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:10.878367   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:10.879610   11992 pod_ready.go:92] pod "etcd-ha-586300" in "kube-system" namespace has status "Ready":"True"
	I0513 23:01:10.879610   11992 pod_ready.go:81] duration metric: took 9.9627ms for pod "etcd-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:10.879683   11992 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:10.879754   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:10.879793   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:10.879793   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:10.879793   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:10.883561   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:10.884841   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:10.884841   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:10.884841   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:10.884892   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:10.888707   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:11.392915   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:11.393007   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:11.393007   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:11.393007   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:11.400147   11992 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:01:11.401023   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:11.401023   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:11.401023   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:11.401023   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:11.405957   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:11.892790   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:11.892863   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:11.892863   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:11.892863   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:11.897522   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:11.899057   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:11.899057   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:11.899057   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:11.899057   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:11.904191   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:12.390988   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:12.390988   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:12.390988   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:12.390988   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:12.398885   11992 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:01:12.399951   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:12.399951   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:12.399951   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:12.399951   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:12.404548   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:12.890519   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:12.890603   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:12.890603   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:12.890603   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:12.896500   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:01:12.897684   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:12.897684   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:12.897684   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:12.897684   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:12.902251   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:12.904041   11992 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:01:13.390520   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:13.390520   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:13.390766   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:13.390766   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:13.395035   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:13.396915   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:13.397006   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:13.397006   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:13.397006   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:13.400181   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:13.890540   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:13.890640   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:13.890640   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:13.890719   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:13.895684   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:13.896782   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:13.896857   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:13.896857   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:13.896857   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:13.900997   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:14.392366   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:14.392453   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:14.392453   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:14.392453   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:14.396682   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:14.398198   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:14.398198   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:14.398198   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:14.398198   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:14.402168   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:14.890162   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:14.890547   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:14.890547   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:14.890547   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:14.895316   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:14.896237   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:14.896341   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:14.896341   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:14.896341   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:14.899638   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:15.391924   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:15.391924   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:15.391924   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:15.391924   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:15.396698   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:15.398453   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:15.398516   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:15.398516   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:15.398516   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:15.404029   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:01:15.405311   11992 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:01:15.888629   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:15.888719   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:15.888804   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:15.888804   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:15.893471   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:15.894469   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:15.894469   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:15.894469   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:15.894469   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:15.898582   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:16.391866   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:16.391946   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:16.391946   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:16.392023   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:16.397803   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:01:16.398996   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:16.398996   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:16.399071   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:16.399071   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:16.403153   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:16.888988   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:16.888988   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:16.888988   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:16.888988   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:16.893258   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:16.894492   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:16.894553   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:16.894553   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:16.894553   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:16.900351   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:01:17.392447   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:17.392447   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:17.392447   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:17.392447   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:17.396482   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:17.397906   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:17.397906   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:17.397906   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:17.397906   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:17.402655   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:17.893760   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:17.893857   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:17.893857   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:17.893857   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:17.902058   11992 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 23:01:17.903267   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:17.903329   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:17.903329   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:17.903329   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:17.906596   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:17.908177   11992 pod_ready.go:102] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"False"
	I0513 23:01:18.381210   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:18.381210   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:18.381294   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:18.381294   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:18.389357   11992 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 23:01:18.390375   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:18.390375   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:18.390409   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:18.390409   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:18.394536   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:18.886643   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:01:18.886717   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:18.886717   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:18.886717   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:18.895343   11992 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 23:01:18.896356   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:18.896356   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:18.896356   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:18.896356   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:18.902590   11992 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:01:18.903417   11992 pod_ready.go:92] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"True"
	I0513 23:01:18.903417   11992 pod_ready.go:81] duration metric: took 8.0234142s for pod "etcd-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:18.903417   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:18.903417   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300
	I0513 23:01:18.903417   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:18.903417   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:18.903417   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:18.908034   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:18.909571   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:01:18.909599   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:18.909599   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:18.909599   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:18.913508   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:18.914873   11992 pod_ready.go:92] pod "kube-apiserver-ha-586300" in "kube-system" namespace has status "Ready":"True"
	I0513 23:01:18.914873   11992 pod_ready.go:81] duration metric: took 11.4558ms for pod "kube-apiserver-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:18.914873   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:18.914956   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300-m02
	I0513 23:01:18.915032   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:18.915032   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:18.915032   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:18.919147   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:18.919147   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:18.920489   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:18.920489   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:18.920489   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:18.923248   11992 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:01:18.923729   11992 pod_ready.go:92] pod "kube-apiserver-ha-586300-m02" in "kube-system" namespace has status "Ready":"True"
	I0513 23:01:18.923729   11992 pod_ready.go:81] duration metric: took 8.8555ms for pod "kube-apiserver-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:18.923729   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:18.923729   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-586300
	I0513 23:01:18.923729   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:18.923729   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:18.923729   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:18.927884   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:18.927977   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:01:18.927977   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:18.927977   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:18.927977   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:18.932598   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:18.933372   11992 pod_ready.go:92] pod "kube-controller-manager-ha-586300" in "kube-system" namespace has status "Ready":"True"
	I0513 23:01:18.933372   11992 pod_ready.go:81] duration metric: took 9.6423ms for pod "kube-controller-manager-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:18.933372   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:18.933487   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-586300-m02
	I0513 23:01:18.933487   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:18.933487   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:18.933487   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:18.938742   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:01:18.939575   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:18.939575   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:18.939575   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:18.939575   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:18.943421   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:01:18.944625   11992 pod_ready.go:92] pod "kube-controller-manager-ha-586300-m02" in "kube-system" namespace has status "Ready":"True"
	I0513 23:01:18.944674   11992 pod_ready.go:81] duration metric: took 11.2482ms for pod "kube-controller-manager-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:18.944729   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6mpjv" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:19.089651   11992 request.go:629] Waited for 144.5515ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6mpjv
	I0513 23:01:19.089737   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6mpjv
	I0513 23:01:19.089737   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:19.089737   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:19.089737   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:19.095805   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:01:19.292653   11992 request.go:629] Waited for 195.1762ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:19.292861   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:19.292861   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:19.292861   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:19.292861   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:19.298216   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:19.299332   11992 pod_ready.go:92] pod "kube-proxy-6mpjv" in "kube-system" namespace has status "Ready":"True"
	I0513 23:01:19.299332   11992 pod_ready.go:81] duration metric: took 354.5372ms for pod "kube-proxy-6mpjv" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:19.299332   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-77zxb" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:19.497154   11992 request.go:629] Waited for 197.815ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-77zxb
	I0513 23:01:19.497501   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-77zxb
	I0513 23:01:19.497501   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:19.497501   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:19.497501   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:19.503173   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:01:19.687111   11992 request.go:629] Waited for 182.9406ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:01:19.687352   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:01:19.687461   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:19.687461   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:19.687461   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:19.691930   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:19.692295   11992 pod_ready.go:92] pod "kube-proxy-77zxb" in "kube-system" namespace has status "Ready":"True"
	I0513 23:01:19.692295   11992 pod_ready.go:81] duration metric: took 392.9482ms for pod "kube-proxy-77zxb" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:19.692295   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:19.888519   11992 request.go:629] Waited for 196.2158ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-586300
	I0513 23:01:19.888914   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-586300
	I0513 23:01:19.888914   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:19.888914   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:19.888914   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:19.895307   11992 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:01:20.091167   11992 request.go:629] Waited for 194.2281ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:01:20.091283   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:01:20.091283   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:20.091283   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:20.091580   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:20.097060   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:01:20.097060   11992 pod_ready.go:92] pod "kube-scheduler-ha-586300" in "kube-system" namespace has status "Ready":"True"
	I0513 23:01:20.097592   11992 pod_ready.go:81] duration metric: took 405.2804ms for pod "kube-scheduler-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:20.097592   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:20.294339   11992 request.go:629] Waited for 196.7396ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-586300-m02
	I0513 23:01:20.294339   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-586300-m02
	I0513 23:01:20.294339   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:20.294339   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:20.294339   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:20.298758   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:20.499306   11992 request.go:629] Waited for 199.177ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:20.499628   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:01:20.499628   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:20.499718   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:20.499718   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:20.504910   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:01:20.505596   11992 pod_ready.go:92] pod "kube-scheduler-ha-586300-m02" in "kube-system" namespace has status "Ready":"True"
	I0513 23:01:20.505596   11992 pod_ready.go:81] duration metric: took 407.9874ms for pod "kube-scheduler-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:01:20.505596   11992 pod_ready.go:38] duration metric: took 9.6776033s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0513 23:01:20.505686   11992 api_server.go:52] waiting for apiserver process to appear ...
	I0513 23:01:20.513650   11992 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 23:01:20.537895   11992 api_server.go:72] duration metric: took 16.689765s to wait for apiserver process to appear ...
	I0513 23:01:20.537895   11992 api_server.go:88] waiting for apiserver healthz status ...
	I0513 23:01:20.537895   11992 api_server.go:253] Checking apiserver healthz at https://172.23.102.229:8443/healthz ...
	I0513 23:01:20.545795   11992 api_server.go:279] https://172.23.102.229:8443/healthz returned 200:
	ok
	I0513 23:01:20.545890   11992 round_trippers.go:463] GET https://172.23.102.229:8443/version
	I0513 23:01:20.545890   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:20.546001   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:20.546001   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:20.550028   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:01:20.550028   11992 api_server.go:141] control plane version: v1.30.0
	I0513 23:01:20.550028   11992 api_server.go:131] duration metric: took 12.1328ms to wait for apiserver health ...
	I0513 23:01:20.550028   11992 system_pods.go:43] waiting for kube-system pods to appear ...
	I0513 23:01:20.700929   11992 request.go:629] Waited for 150.6961ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods
	I0513 23:01:20.701011   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods
	I0513 23:01:20.701206   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:20.701206   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:20.701206   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:20.727755   11992 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0513 23:01:20.734753   11992 system_pods.go:59] 17 kube-system pods found
	I0513 23:01:20.734753   11992 system_pods.go:61] "coredns-7db6d8ff4d-4qbhd" [6fa6abce-1f7c-4119-b74c-e4e2275f77f4] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "coredns-7db6d8ff4d-wj8z7" [21d8cc35-f37a-42b6-9e44-dfce810d1d51] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "etcd-ha-586300" [a1809532-311c-4f80-9236-fec7256f7b3c] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "etcd-ha-586300-m02" [37b3bba9-35b3-4723-b954-94c4f45c9b96] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "kindnet-8hh55" [4fb9a98f-06d4-4333-89dc-b90c8b880f92] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "kindnet-vddtk" [bf6e57db-8270-4024-ba93-abce11d81513] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "kube-apiserver-ha-586300" [d6659d47-ce69-4334-a35c-7b66898b49de] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "kube-apiserver-ha-586300-m02" [0b8839d5-3133-4d52-9264-9d998bc54617] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "kube-controller-manager-ha-586300" [3416887d-320b-4417-b6ba-ffabb7b84885] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "kube-controller-manager-ha-586300-m02" [eccf51fc-16b7-4d89-95ab-59ec4e8fbc8c] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "kube-proxy-6mpjv" [0cd7eb37-2ff4-487e-b5e6-9d71c69a4814] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "kube-proxy-77zxb" [bc2480b2-3de0-49c4-b84e-8ae7e85829a1] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "kube-scheduler-ha-586300" [8bb322de-7dd8-4780-ae04-9d18a293aa0b] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "kube-scheduler-ha-586300-m02" [c3bb6486-257a-4993-9127-34dada81473a] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "kube-vip-ha-586300" [5dfa662f-0df1-485a-a52b-fdcd87e23145] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "kube-vip-ha-586300-m02" [4372ac88-49f7-4dcd-9c13-1b8484817d28] Running
	I0513 23:01:20.734753   11992 system_pods.go:61] "storage-provisioner" [fc11360c-19a1-4d0b-966e-49946c8b0d47] Running
	I0513 23:01:20.734753   11992 system_pods.go:74] duration metric: took 184.7177ms to wait for pod list to return data ...
	I0513 23:01:20.734753   11992 default_sa.go:34] waiting for default service account to be created ...
	I0513 23:01:20.890784   11992 request.go:629] Waited for 155.7775ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/default/serviceaccounts
	I0513 23:01:20.891114   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/default/serviceaccounts
	I0513 23:01:20.891114   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:20.891114   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:20.891114   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:20.898491   11992 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:01:20.898491   11992 default_sa.go:45] found service account: "default"
	I0513 23:01:20.898491   11992 default_sa.go:55] duration metric: took 163.732ms for default service account to be created ...
	I0513 23:01:20.898491   11992 system_pods.go:116] waiting for k8s-apps to be running ...
	I0513 23:01:21.094997   11992 request.go:629] Waited for 196.2701ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods
	I0513 23:01:21.094997   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods
	I0513 23:01:21.094997   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:21.094997   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:21.095123   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:21.103184   11992 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 23:01:21.109523   11992 system_pods.go:86] 17 kube-system pods found
	I0513 23:01:21.109592   11992 system_pods.go:89] "coredns-7db6d8ff4d-4qbhd" [6fa6abce-1f7c-4119-b74c-e4e2275f77f4] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "coredns-7db6d8ff4d-wj8z7" [21d8cc35-f37a-42b6-9e44-dfce810d1d51] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "etcd-ha-586300" [a1809532-311c-4f80-9236-fec7256f7b3c] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "etcd-ha-586300-m02" [37b3bba9-35b3-4723-b954-94c4f45c9b96] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "kindnet-8hh55" [4fb9a98f-06d4-4333-89dc-b90c8b880f92] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "kindnet-vddtk" [bf6e57db-8270-4024-ba93-abce11d81513] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "kube-apiserver-ha-586300" [d6659d47-ce69-4334-a35c-7b66898b49de] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "kube-apiserver-ha-586300-m02" [0b8839d5-3133-4d52-9264-9d998bc54617] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "kube-controller-manager-ha-586300" [3416887d-320b-4417-b6ba-ffabb7b84885] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "kube-controller-manager-ha-586300-m02" [eccf51fc-16b7-4d89-95ab-59ec4e8fbc8c] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "kube-proxy-6mpjv" [0cd7eb37-2ff4-487e-b5e6-9d71c69a4814] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "kube-proxy-77zxb" [bc2480b2-3de0-49c4-b84e-8ae7e85829a1] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "kube-scheduler-ha-586300" [8bb322de-7dd8-4780-ae04-9d18a293aa0b] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "kube-scheduler-ha-586300-m02" [c3bb6486-257a-4993-9127-34dada81473a] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "kube-vip-ha-586300" [5dfa662f-0df1-485a-a52b-fdcd87e23145] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "kube-vip-ha-586300-m02" [4372ac88-49f7-4dcd-9c13-1b8484817d28] Running
	I0513 23:01:21.109592   11992 system_pods.go:89] "storage-provisioner" [fc11360c-19a1-4d0b-966e-49946c8b0d47] Running
	I0513 23:01:21.109592   11992 system_pods.go:126] duration metric: took 211.0922ms to wait for k8s-apps to be running ...
	I0513 23:01:21.109592   11992 system_svc.go:44] waiting for kubelet service to be running ....
	I0513 23:01:21.117516   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 23:01:21.142542   11992 system_svc.go:56] duration metric: took 32.9482ms WaitForService to wait for kubelet
	I0513 23:01:21.142641   11992 kubeadm.go:576] duration metric: took 17.2944876s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 23:01:21.142641   11992 node_conditions.go:102] verifying NodePressure condition ...
	I0513 23:01:21.298495   11992 request.go:629] Waited for 155.5894ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes
	I0513 23:01:21.298495   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes
	I0513 23:01:21.298495   11992 round_trippers.go:469] Request Headers:
	I0513 23:01:21.298495   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:01:21.298608   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:01:21.306173   11992 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:01:21.307269   11992 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0513 23:01:21.307269   11992 node_conditions.go:123] node cpu capacity is 2
	I0513 23:01:21.307269   11992 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0513 23:01:21.307269   11992 node_conditions.go:123] node cpu capacity is 2
	I0513 23:01:21.307269   11992 node_conditions.go:105] duration metric: took 164.6215ms to run NodePressure ...
	I0513 23:01:21.307269   11992 start.go:240] waiting for startup goroutines ...
	I0513 23:01:21.307269   11992 start.go:254] writing updated cluster config ...
	I0513 23:01:21.311014   11992 out.go:177] 
	I0513 23:01:21.326682   11992 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:01:21.326682   11992 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\config.json ...
	I0513 23:01:21.332189   11992 out.go:177] * Starting "ha-586300-m03" control-plane node in "ha-586300" cluster
	I0513 23:01:21.335224   11992 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 23:01:21.335224   11992 cache.go:56] Caching tarball of preloaded images
	I0513 23:01:21.335852   11992 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0513 23:01:21.335884   11992 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 23:01:21.335884   11992 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\config.json ...
	I0513 23:01:21.342120   11992 start.go:360] acquireMachinesLock for ha-586300-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 23:01:21.342120   11992 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-586300-m03"
	I0513 23:01:21.342120   11992 start.go:93] Provisioning new machine with config: &{Name:ha-586300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-586300 Namespace:def
ault APIServerHAVIP:172.23.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.102.229 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.108.68 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false
istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 23:01:21.342120   11992 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0513 23:01:21.345591   11992 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 23:01:21.345961   11992 start.go:159] libmachine.API.Create for "ha-586300" (driver="hyperv")
	I0513 23:01:21.345995   11992 client.go:168] LocalClient.Create starting
	I0513 23:01:21.346381   11992 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0513 23:01:21.346409   11992 main.go:141] libmachine: Decoding PEM data...
	I0513 23:01:21.346409   11992 main.go:141] libmachine: Parsing certificate...
	I0513 23:01:21.346409   11992 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0513 23:01:21.346409   11992 main.go:141] libmachine: Decoding PEM data...
	I0513 23:01:21.346409   11992 main.go:141] libmachine: Parsing certificate...
	I0513 23:01:21.346952   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0513 23:01:23.083890   11992 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0513 23:01:23.083890   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:23.084005   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0513 23:01:24.639184   11992 main.go:141] libmachine: [stdout =====>] : False
	
	I0513 23:01:24.639716   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:24.639716   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0513 23:01:25.964660   11992 main.go:141] libmachine: [stdout =====>] : True
	
	I0513 23:01:25.964660   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:25.965308   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0513 23:01:29.281155   11992 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0513 23:01:29.281155   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:29.282732   11992 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-amd64.iso...
	I0513 23:01:29.593834   11992 main.go:141] libmachine: Creating SSH key...
	I0513 23:01:29.731958   11992 main.go:141] libmachine: Creating VM...
	I0513 23:01:29.732952   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0513 23:01:32.334634   11992 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0513 23:01:32.334634   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:32.334722   11992 main.go:141] libmachine: Using switch "Default Switch"
	I0513 23:01:32.334808   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0513 23:01:33.921718   11992 main.go:141] libmachine: [stdout =====>] : True
	
	I0513 23:01:33.921806   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:33.921806   11992 main.go:141] libmachine: Creating VHD
	I0513 23:01:33.921806   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0513 23:01:37.448100   11992 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : DECC4003-BBC9-4CBF-844E-AF81776EB307
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0513 23:01:37.448100   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:37.448100   11992 main.go:141] libmachine: Writing magic tar header
	I0513 23:01:37.449095   11992 main.go:141] libmachine: Writing SSH key tar header
	I0513 23:01:37.459091   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0513 23:01:40.427049   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:01:40.427858   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:40.427858   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m03\disk.vhd' -SizeBytes 20000MB
	I0513 23:01:42.765170   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:01:42.765170   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:42.765170   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-586300-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0513 23:01:46.022543   11992 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-586300-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0513 23:01:46.023465   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:46.023568   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-586300-m03 -DynamicMemoryEnabled $false
	I0513 23:01:48.072299   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:01:48.072384   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:48.072465   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-586300-m03 -Count 2
	I0513 23:01:50.089656   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:01:50.090506   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:50.090684   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-586300-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m03\boot2docker.iso'
	I0513 23:01:52.414416   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:01:52.414924   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:52.415074   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-586300-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m03\disk.vhd'
	I0513 23:01:54.778291   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:01:54.778291   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:54.778291   11992 main.go:141] libmachine: Starting VM...
	I0513 23:01:54.778674   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-586300-m03
	I0513 23:01:57.638972   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:01:57.639785   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:57.639785   11992 main.go:141] libmachine: Waiting for host to start...
	I0513 23:01:57.639831   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:01:59.668007   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:01:59.668825   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:01:59.668825   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:01.925456   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:02:01.925456   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:02.925523   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:04.907572   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:04.907961   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:04.908056   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:07.181920   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:02:07.181920   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:08.191255   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:10.137087   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:10.137087   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:10.137164   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:12.396002   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:02:12.396002   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:13.396954   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:15.388418   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:15.388418   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:15.388418   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:17.673950   11992 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:02:17.673950   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:18.687274   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:20.678981   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:20.679164   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:20.679164   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:23.058989   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:02:23.059022   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:23.059093   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:25.001432   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:25.001432   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:25.001432   11992 machine.go:94] provisionDockerMachine start ...
	I0513 23:02:25.001618   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:26.941534   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:26.942247   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:26.942247   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:29.247392   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:02:29.247392   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:29.251796   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 23:02:29.252096   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.109.129 22 <nil> <nil>}
	I0513 23:02:29.252096   11992 main.go:141] libmachine: About to run SSH command:
	hostname
	I0513 23:02:29.383765   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0513 23:02:29.383848   11992 buildroot.go:166] provisioning hostname "ha-586300-m03"
	I0513 23:02:29.383848   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:31.319577   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:31.319883   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:31.319883   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:33.570688   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:02:33.570688   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:33.574995   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 23:02:33.575391   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.109.129 22 <nil> <nil>}
	I0513 23:02:33.575463   11992 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-586300-m03 && echo "ha-586300-m03" | sudo tee /etc/hostname
	I0513 23:02:33.746483   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-586300-m03
	
	I0513 23:02:33.746483   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:35.646184   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:35.646184   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:35.646263   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:37.961568   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:02:37.961889   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:37.965584   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 23:02:37.966102   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.109.129 22 <nil> <nil>}
	I0513 23:02:37.966102   11992 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-586300-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-586300-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-586300-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0513 23:02:38.111516   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: 
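The `/etc/hosts` command above either rewrites an existing `127.0.1.1` entry or appends one. A minimal standalone sketch of that idiom, run against a temporary file instead of the real `/etc/hosts` (the seed contents are illustrative):

```shell
# Sketch of the provisioner's /etc/hosts hostname idiom, on a temp file.
HOSTS_FILE="$(mktemp)"
NEW_NAME="ha-586300-m03"
printf '127.0.0.1 localhost\n127.0.1.1 minikube\n' > "$HOSTS_FILE"

if ! grep -q "[[:space:]]${NEW_NAME}\$" "$HOSTS_FILE"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS_FILE"; then
    # An existing 127.0.1.1 entry: rewrite it with the new hostname.
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NEW_NAME}/" "$HOSTS_FILE"
  else
    # No 127.0.1.1 entry yet: append one.
    echo "127.0.1.1 ${NEW_NAME}" >> "$HOSTS_FILE"
  fi
fi
cat "$HOSTS_FILE"
```

The guard makes the script idempotent: rerunning it leaves the file unchanged once the hostname is present.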
	I0513 23:02:38.111597   11992 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0513 23:02:38.111661   11992 buildroot.go:174] setting up certificates
	I0513 23:02:38.111661   11992 provision.go:84] configureAuth start
	I0513 23:02:38.111733   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:40.003471   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:40.004441   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:40.004536   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:42.266168   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:02:42.266168   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:42.266168   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:44.175565   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:44.176044   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:44.176044   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:46.473899   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:02:46.474407   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:46.474407   11992 provision.go:143] copyHostCerts
	I0513 23:02:46.474545   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0513 23:02:46.474830   11992 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0513 23:02:46.474830   11992 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0513 23:02:46.475239   11992 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0513 23:02:46.476165   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0513 23:02:46.476576   11992 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0513 23:02:46.476576   11992 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0513 23:02:46.476774   11992 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0513 23:02:46.477836   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0513 23:02:46.478167   11992 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0513 23:02:46.478239   11992 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0513 23:02:46.478713   11992 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0513 23:02:46.479451   11992 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-586300-m03 san=[127.0.0.1 172.23.109.129 ha-586300-m03 localhost minikube]
	I0513 23:02:46.604874   11992 provision.go:177] copyRemoteCerts
	I0513 23:02:46.612818   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0513 23:02:46.612818   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:48.545088   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:48.545088   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:48.545356   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:50.879996   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:02:50.880488   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:50.880792   11992 sshutil.go:53] new ssh client: &{IP:172.23.109.129 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m03\id_rsa Username:docker}
	I0513 23:02:50.992674   11992 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3796833s)
	I0513 23:02:50.992674   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0513 23:02:50.993208   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0513 23:02:51.036147   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0513 23:02:51.036147   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0513 23:02:51.083622   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0513 23:02:51.083892   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0513 23:02:51.130883   11992 provision.go:87] duration metric: took 13.0186466s to configureAuth
	I0513 23:02:51.130883   11992 buildroot.go:189] setting minikube options for container-runtime
	I0513 23:02:51.131086   11992 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:02:51.131630   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:53.038183   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:53.038372   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:53.038451   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:55.339732   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:02:55.339781   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:55.342960   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 23:02:55.343560   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.109.129 22 <nil> <nil>}
	I0513 23:02:55.343560   11992 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0513 23:02:55.475089   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0513 23:02:55.475089   11992 buildroot.go:70] root file system type: tmpfs
	I0513 23:02:55.476069   11992 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0513 23:02:55.476069   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:02:57.380093   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:02:57.380309   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:57.380309   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:02:59.720715   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:02:59.721090   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:02:59.724994   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 23:02:59.725229   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.109.129 22 <nil> <nil>}
	I0513 23:02:59.725229   11992 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.23.102.229"
	Environment="NO_PROXY=172.23.102.229,172.23.108.68"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0513 23:02:59.885592   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.23.102.229
	Environment=NO_PROXY=172.23.102.229,172.23.108.68
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0513 23:02:59.885592   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:03:01.821417   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:03:01.821417   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:01.821498   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:03:04.160541   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:03:04.160541   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:04.165151   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 23:03:04.165489   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.109.129 22 <nil> <nil>}
	I0513 23:03:04.165564   11992 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0513 23:03:06.285964   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
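The `diff ... || { mv ...; systemctl ...; }` command above only swaps in `docker.service.new` (and reloads systemd) when the unit is missing or has changed. A standalone sketch of that "write `.new`, diff, swap on change" idiom, on temp files and without the systemd calls:

```shell
# Sketch of the conditional unit-file update idiom, using temp paths.
UNIT_DIR="$(mktemp -d)"
CURRENT_UNIT="$UNIT_DIR/docker.service"
NEW_UNIT="$UNIT_DIR/docker.service.new"
printf '[Unit]\nDescription=Docker Application Container Engine\n' > "$NEW_UNIT"

# diff exits nonzero when the current unit is missing or differs, so the
# new file is moved into place only when something actually changed.
if ! diff -u "$CURRENT_UNIT" "$NEW_UNIT" >/dev/null 2>&1; then
  mv "$NEW_UNIT" "$CURRENT_UNIT"
fi
```

In the log the `can't stat` diff error is the expected first-run case: no unit existed yet, so the new file was installed and the service enabled.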
	I0513 23:03:06.286053   11992 machine.go:97] duration metric: took 41.2829883s to provisionDockerMachine
	I0513 23:03:06.286053   11992 client.go:171] duration metric: took 1m44.9358552s to LocalClient.Create
	I0513 23:03:06.286118   11992 start.go:167] duration metric: took 1m44.9359933s to libmachine.API.Create "ha-586300"
	I0513 23:03:06.286118   11992 start.go:293] postStartSetup for "ha-586300-m03" (driver="hyperv")
	I0513 23:03:06.286256   11992 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0513 23:03:06.294858   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0513 23:03:06.294858   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:03:08.278888   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:03:08.279036   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:08.279036   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:03:10.647475   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:03:10.648523   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:10.648577   11992 sshutil.go:53] new ssh client: &{IP:172.23.109.129 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m03\id_rsa Username:docker}
	I0513 23:03:10.769205   11992 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4741709s)
	I0513 23:03:10.778279   11992 ssh_runner.go:195] Run: cat /etc/os-release
	I0513 23:03:10.785301   11992 info.go:137] Remote host: Buildroot 2023.02.9
	I0513 23:03:10.785392   11992 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0513 23:03:10.785694   11992 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0513 23:03:10.786380   11992 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> 59842.pem in /etc/ssl/certs
	I0513 23:03:10.786380   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /etc/ssl/certs/59842.pem
	I0513 23:03:10.795035   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0513 23:03:10.812928   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /etc/ssl/certs/59842.pem (1708 bytes)
	I0513 23:03:10.858558   11992 start.go:296] duration metric: took 4.5721209s for postStartSetup
	I0513 23:03:10.860972   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:03:12.839515   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:03:12.839953   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:12.839953   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:03:15.137463   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:03:15.137463   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:15.138090   11992 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\config.json ...
	I0513 23:03:15.140750   11992 start.go:128] duration metric: took 1m53.7941169s to createHost
	I0513 23:03:15.140852   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:03:17.052586   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:03:17.052586   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:17.053216   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:03:19.367766   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:03:19.367766   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:19.371997   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 23:03:19.372433   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.109.129 22 <nil> <nil>}
	I0513 23:03:19.372433   11992 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0513 23:03:19.509989   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715641399.682594913
	
	I0513 23:03:19.509989   11992 fix.go:216] guest clock: 1715641399.682594913
	I0513 23:03:19.509989   11992 fix.go:229] Guest: 2024-05-13 23:03:19.682594913 +0000 UTC Remote: 2024-05-13 23:03:15.1407505 +0000 UTC m=+515.208189301 (delta=4.541844413s)
	I0513 23:03:19.510528   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:03:21.409957   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:03:21.409957   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:21.410041   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:03:23.703614   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:03:23.703614   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:23.707590   11992 main.go:141] libmachine: Using SSH client type: native
	I0513 23:03:23.707799   11992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.109.129 22 <nil> <nil>}
	I0513 23:03:23.707799   11992 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715641399
	I0513 23:03:23.856021   11992 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 13 23:03:19 UTC 2024
	
	I0513 23:03:23.856132   11992 fix.go:236] clock set: Mon May 13 23:03:19 UTC 2024
	 (err=<nil>)
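The clock-fix step above reads the guest's `date +%s` value, compares it with the local clock, and resets the guest with `sudo date -s @<epoch>` when the delta is large. A sketch of that comparison; `GUEST_EPOCH` is taken from the log, while `LOCAL_EPOCH` is an assumed local reading approximating the ~4.5s delta reported:

```shell
# Recompute a guest-vs-local clock delta as the fix step does.
GUEST_EPOCH=1715641399   # guest `date +%s` value (from the log)
LOCAL_EPOCH=1715641395   # assumed local reading for illustration
DELTA=$((GUEST_EPOCH - LOCAL_EPOCH))
echo "guest is ${DELTA}s ahead; would run: sudo date -s @${GUEST_EPOCH}"
```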
	I0513 23:03:23.856132   11992 start.go:83] releasing machines lock for "ha-586300-m03", held for 2m2.5091546s
	I0513 23:03:23.856275   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:03:25.793336   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:03:25.793336   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:25.793336   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:03:28.121905   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:03:28.121905   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:28.125397   11992 out.go:177] * Found network options:
	I0513 23:03:28.127890   11992 out.go:177]   - NO_PROXY=172.23.102.229,172.23.108.68
	W0513 23:03:28.129707   11992 proxy.go:119] fail to check proxy env: Error ip not in block
	W0513 23:03:28.129707   11992 proxy.go:119] fail to check proxy env: Error ip not in block
	I0513 23:03:28.131849   11992 out.go:177]   - NO_PROXY=172.23.102.229,172.23.108.68
	W0513 23:03:28.135272   11992 proxy.go:119] fail to check proxy env: Error ip not in block
	W0513 23:03:28.135384   11992 proxy.go:119] fail to check proxy env: Error ip not in block
	W0513 23:03:28.137846   11992 proxy.go:119] fail to check proxy env: Error ip not in block
	W0513 23:03:28.137846   11992 proxy.go:119] fail to check proxy env: Error ip not in block
	I0513 23:03:28.139697   11992 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0513 23:03:28.139697   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:03:28.146844   11992 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0513 23:03:28.146844   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:03:30.124156   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:03:30.124156   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:30.124349   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:03:30.147482   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:03:30.147482   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:30.147482   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:03:32.510188   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:03:32.510188   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:32.510469   11992 sshutil.go:53] new ssh client: &{IP:172.23.109.129 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m03\id_rsa Username:docker}
	I0513 23:03:32.537168   11992 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:03:32.537255   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:32.537771   11992 sshutil.go:53] new ssh client: &{IP:172.23.109.129 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m03\id_rsa Username:docker}
	I0513 23:03:32.691094   11992 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5439906s)
	I0513 23:03:32.691162   11992 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5512172s)
	W0513 23:03:32.691162   11992 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0513 23:03:32.699962   11992 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0513 23:03:32.728308   11992 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0513 23:03:32.728308   11992 start.go:494] detecting cgroup driver to use...
	I0513 23:03:32.728308   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 23:03:32.772226   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0513 23:03:32.804831   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0513 23:03:32.826837   11992 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0513 23:03:32.837151   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0513 23:03:32.864040   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 23:03:32.896905   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0513 23:03:32.924026   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 23:03:32.961588   11992 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0513 23:03:32.996868   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0513 23:03:33.026582   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0513 23:03:33.052584   11992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0513 23:03:33.081314   11992 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0513 23:03:33.109916   11992 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0513 23:03:33.136818   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:03:33.312615   11992 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0513 23:03:33.343531   11992 start.go:494] detecting cgroup driver to use...
	I0513 23:03:33.352386   11992 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0513 23:03:33.383406   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 23:03:33.413864   11992 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0513 23:03:33.450055   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 23:03:33.480675   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 23:03:33.512385   11992 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0513 23:03:33.567171   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 23:03:33.590983   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 23:03:33.635608   11992 ssh_runner.go:195] Run: which cri-dockerd
	I0513 23:03:33.650594   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0513 23:03:33.671225   11992 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0513 23:03:33.711697   11992 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0513 23:03:33.891985   11992 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0513 23:03:34.056859   11992 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0513 23:03:34.056859   11992 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0513 23:03:34.095674   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:03:34.277063   11992 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0513 23:03:36.788096   11992 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5109342s)
	I0513 23:03:36.797029   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0513 23:03:36.834629   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 23:03:36.864936   11992 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0513 23:03:37.058361   11992 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0513 23:03:37.257096   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:03:37.447902   11992 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0513 23:03:37.485604   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 23:03:37.517731   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:03:37.704688   11992 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0513 23:03:37.810519   11992 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0513 23:03:37.822568   11992 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0513 23:03:37.830126   11992 start.go:562] Will wait 60s for crictl version
	I0513 23:03:37.838770   11992 ssh_runner.go:195] Run: which crictl
	I0513 23:03:37.861035   11992 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0513 23:03:37.915612   11992 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0513 23:03:37.923611   11992 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 23:03:37.966270   11992 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 23:03:37.999973   11992 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0513 23:03:38.004306   11992 out.go:177]   - env NO_PROXY=172.23.102.229
	I0513 23:03:38.007563   11992 out.go:177]   - env NO_PROXY=172.23.102.229,172.23.108.68
	I0513 23:03:38.010575   11992 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0513 23:03:38.014330   11992 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0513 23:03:38.015329   11992 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0513 23:03:38.015329   11992 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0513 23:03:38.015329   11992 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:95:ed Flags:up|broadcast|multicast|running}
	I0513 23:03:38.017063   11992 ip.go:210] interface addr: fe80::3ceb:68d:afab:af25/64
	I0513 23:03:38.017063   11992 ip.go:210] interface addr: 172.23.96.1/20
	I0513 23:03:38.028728   11992 ssh_runner.go:195] Run: grep 172.23.96.1	host.minikube.internal$ /etc/hosts
	I0513 23:03:38.035142   11992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0513 23:03:38.061264   11992 mustload.go:65] Loading cluster: ha-586300
	I0513 23:03:38.061786   11992 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:03:38.062003   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 23:03:40.006439   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:03:40.007238   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:40.007238   11992 host.go:66] Checking if "ha-586300" exists ...
	I0513 23:03:40.007382   11992 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300 for IP: 172.23.109.129
	I0513 23:03:40.007382   11992 certs.go:194] generating shared ca certs ...
	I0513 23:03:40.007382   11992 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:03:40.008337   11992 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0513 23:03:40.008603   11992 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0513 23:03:40.008697   11992 certs.go:256] generating profile certs ...
	I0513 23:03:40.009260   11992 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\client.key
	I0513 23:03:40.009333   11992 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.53a5741f
	I0513 23:03:40.009430   11992 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.53a5741f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.102.229 172.23.108.68 172.23.109.129 172.23.111.254]
	I0513 23:03:40.148115   11992 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.53a5741f ...
	I0513 23:03:40.148115   11992 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.53a5741f: {Name:mk28c00991499451c4a682477df67fc5ce29b66c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:03:40.150112   11992 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.53a5741f ...
	I0513 23:03:40.150112   11992 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.53a5741f: {Name:mk10a0e3613314d7e3609376ac35f790fbf46370 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:03:40.150468   11992 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt.53a5741f -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt
	I0513 23:03:40.164561   11992 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key.53a5741f -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key
	I0513 23:03:40.165557   11992 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key
	I0513 23:03:40.165557   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0513 23:03:40.165557   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0513 23:03:40.165557   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0513 23:03:40.165557   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0513 23:03:40.166564   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0513 23:03:40.166564   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0513 23:03:40.166564   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0513 23:03:40.166564   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0513 23:03:40.167920   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem (1338 bytes)
	W0513 23:03:40.168272   11992 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984_empty.pem, impossibly tiny 0 bytes
	I0513 23:03:40.168371   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0513 23:03:40.168527   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0513 23:03:40.168527   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0513 23:03:40.169069   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0513 23:03:40.169585   11992 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem (1708 bytes)
	I0513 23:03:40.169774   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:03:40.169964   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem -> /usr/share/ca-certificates/5984.pem
	I0513 23:03:40.170135   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /usr/share/ca-certificates/59842.pem
	I0513 23:03:40.170135   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 23:03:42.139887   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:03:42.139887   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:42.140027   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 23:03:44.536089   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 23:03:44.536089   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:44.536089   11992 sshutil.go:53] new ssh client: &{IP:172.23.102.229 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\id_rsa Username:docker}
	I0513 23:03:44.644441   11992 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0513 23:03:44.652312   11992 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0513 23:03:44.680447   11992 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0513 23:03:44.687343   11992 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0513 23:03:44.714890   11992 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0513 23:03:44.722701   11992 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0513 23:03:44.749215   11992 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0513 23:03:44.755490   11992 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0513 23:03:44.783327   11992 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0513 23:03:44.789739   11992 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0513 23:03:44.817169   11992 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0513 23:03:44.823471   11992 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0513 23:03:44.843825   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0513 23:03:44.891578   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0513 23:03:44.937727   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0513 23:03:44.983143   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0513 23:03:45.028970   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0513 23:03:45.076500   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0513 23:03:45.124489   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0513 23:03:45.174081   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-586300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0513 23:03:45.219276   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0513 23:03:45.266676   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem --> /usr/share/ca-certificates/5984.pem (1338 bytes)
	I0513 23:03:45.316744   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /usr/share/ca-certificates/59842.pem (1708 bytes)
	I0513 23:03:45.361143   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0513 23:03:45.390832   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0513 23:03:45.423697   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0513 23:03:45.454275   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0513 23:03:45.488020   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0513 23:03:45.518417   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0513 23:03:45.551122   11992 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0513 23:03:45.596609   11992 ssh_runner.go:195] Run: openssl version
	I0513 23:03:45.613353   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59842.pem && ln -fs /usr/share/ca-certificates/59842.pem /etc/ssl/certs/59842.pem"
	I0513 23:03:45.644260   11992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59842.pem
	I0513 23:03:45.650743   11992 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0513 23:03:45.661386   11992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59842.pem
	I0513 23:03:45.678597   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59842.pem /etc/ssl/certs/3ec20f2e.0"
	I0513 23:03:45.709014   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0513 23:03:45.735579   11992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:03:45.742754   11992 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:03:45.750554   11992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:03:45.769896   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0513 23:03:45.796869   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5984.pem && ln -fs /usr/share/ca-certificates/5984.pem /etc/ssl/certs/5984.pem"
	I0513 23:03:45.830663   11992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5984.pem
	I0513 23:03:45.837116   11992 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0513 23:03:45.845371   11992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5984.pem
	I0513 23:03:45.864544   11992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5984.pem /etc/ssl/certs/51391683.0"
	I0513 23:03:45.898702   11992 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0513 23:03:45.904992   11992 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0513 23:03:45.904992   11992 kubeadm.go:928] updating node {m03 172.23.109.129 8443 v1.30.0 docker true true} ...
	I0513 23:03:45.904992   11992 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-586300-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.109.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-586300 Namespace:default APIServerHAVIP:172.23.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0513 23:03:45.904992   11992 kube-vip.go:115] generating kube-vip config ...
	I0513 23:03:45.913605   11992 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0513 23:03:45.940720   11992 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0513 23:03:45.940720   11992 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.23.111.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0513 23:03:45.949012   11992 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0513 23:03:45.968221   11992 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0513 23:03:45.977770   11992 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0513 23:03:45.995542   11992 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0513 23:03:45.995542   11992 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0513 23:03:45.995542   11992 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0513 23:03:45.995542   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0513 23:03:45.996084   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0513 23:03:46.009482   11992 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0513 23:03:46.009482   11992 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0513 23:03:46.010689   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 23:03:46.016039   11992 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0513 23:03:46.016629   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0513 23:03:46.052703   11992 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0513 23:03:46.052801   11992 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0513 23:03:46.052905   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0513 23:03:46.063056   11992 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0513 23:03:46.130328   11992 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0513 23:03:46.130430   11992 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0513 23:03:47.195166   11992 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0513 23:03:47.213268   11992 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0513 23:03:47.245200   11992 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0513 23:03:47.276900   11992 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0513 23:03:47.319019   11992 ssh_runner.go:195] Run: grep 172.23.111.254	control-plane.minikube.internal$ /etc/hosts
	I0513 23:03:47.326581   11992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.111.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0513 23:03:47.357569   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:03:47.555814   11992 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 23:03:47.594834   11992 host.go:66] Checking if "ha-586300" exists ...
	I0513 23:03:47.595526   11992 start.go:316] joinCluster: &{Name:ha-586300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-586300 Namespace:default APIServerHAVIP:172.23.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.102.229 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.108.68 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.23.109.129 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 23:03:47.595672   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0513 23:03:47.595739   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 23:03:49.539057   11992 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:03:49.539880   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:49.539964   11992 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 23:03:51.886042   11992 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 23:03:51.886042   11992 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:03:51.886999   11992 sshutil.go:53] new ssh client: &{IP:172.23.102.229 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\id_rsa Username:docker}
	I0513 23:03:52.111019   11992 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.5151698s)
	I0513 23:03:52.111250   11992 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.23.109.129 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 23:03:52.111324   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7xcf6m.hzv0vmsdgs1e9s3x --discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-586300-m03 --control-plane --apiserver-advertise-address=172.23.109.129 --apiserver-bind-port=8443"
	I0513 23:04:35.498305   11992 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7xcf6m.hzv0vmsdgs1e9s3x --discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-586300-m03 --control-plane --apiserver-advertise-address=172.23.109.129 --apiserver-bind-port=8443": (43.3851762s)
	I0513 23:04:35.498378   11992 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0513 23:04:36.271743   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-586300-m03 minikube.k8s.io/updated_at=2024_05_13T23_04_36_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761 minikube.k8s.io/name=ha-586300 minikube.k8s.io/primary=false
	I0513 23:04:36.443057   11992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-586300-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0513 23:04:36.599343   11992 start.go:318] duration metric: took 49.001961s to joinCluster
	I0513 23:04:36.599460   11992 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.23.109.129 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 23:04:36.603316   11992 out.go:177] * Verifying Kubernetes components...
	I0513 23:04:36.600510   11992 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:04:36.615543   11992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:04:37.004731   11992 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 23:04:37.053713   11992 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 23:04:37.053713   11992 kapi.go:59] client config for ha-586300: &rest.Config{Host:"https://172.23.111.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-586300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-586300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0513 23:04:37.053713   11992 kubeadm.go:477] Overriding stale ClientConfig host https://172.23.111.254:8443 with https://172.23.102.229:8443
	I0513 23:04:37.054718   11992 node_ready.go:35] waiting up to 6m0s for node "ha-586300-m03" to be "Ready" ...
	I0513 23:04:37.054718   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:37.054718   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:37.054718   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:37.054718   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:37.070156   11992 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0513 23:04:37.564635   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:37.564635   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:37.564635   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:37.564635   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:37.568214   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:38.055679   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:38.055679   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:38.055679   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:38.055679   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:38.063944   11992 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:04:38.560907   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:38.560907   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:38.560907   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:38.560907   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:38.568665   11992 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:04:39.068482   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:39.068482   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:39.068482   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:39.068482   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:39.075083   11992 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:04:39.076533   11992 node_ready.go:53] node "ha-586300-m03" has status "Ready":"False"
	I0513 23:04:39.557119   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:39.557335   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:39.557335   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:39.557335   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:39.578560   11992 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0513 23:04:40.060304   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:40.060304   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:40.060304   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:40.060304   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:40.065323   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:40.568727   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:40.568727   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:40.568727   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:40.568825   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:40.571341   11992 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:04:41.057195   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:41.057248   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:41.057248   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:41.057248   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:41.065729   11992 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 23:04:41.557705   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:41.557861   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:41.557861   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:41.557861   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:41.564466   11992 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:04:41.565459   11992 node_ready.go:53] node "ha-586300-m03" has status "Ready":"False"
	I0513 23:04:42.062638   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:42.062772   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:42.062772   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:42.062772   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:42.066697   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:42.563736   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:42.564133   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:42.564133   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:42.564218   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:42.569442   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:43.057546   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:43.057603   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.057661   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.057720   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.069762   11992 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0513 23:04:43.070434   11992 node_ready.go:49] node "ha-586300-m03" has status "Ready":"True"
	I0513 23:04:43.070434   11992 node_ready.go:38] duration metric: took 6.0154805s for node "ha-586300-m03" to be "Ready" ...
	I0513 23:04:43.070434   11992 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0513 23:04:43.070571   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods
	I0513 23:04:43.070571   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.070648   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.070648   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.082361   11992 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0513 23:04:43.090435   11992 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4qbhd" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:43.090435   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4qbhd
	I0513 23:04:43.090435   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.090435   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.090435   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.094369   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:43.095374   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:04:43.095374   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.095374   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.095374   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.099374   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:43.100374   11992 pod_ready.go:92] pod "coredns-7db6d8ff4d-4qbhd" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:43.100374   11992 pod_ready.go:81] duration metric: took 9.9389ms for pod "coredns-7db6d8ff4d-4qbhd" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:43.100374   11992 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wj8z7" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:43.100374   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wj8z7
	I0513 23:04:43.100374   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.100374   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.100374   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.104368   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:43.104368   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:04:43.104368   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.105437   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.105437   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.111362   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:43.112353   11992 pod_ready.go:92] pod "coredns-7db6d8ff4d-wj8z7" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:43.112353   11992 pod_ready.go:81] duration metric: took 11.9788ms for pod "coredns-7db6d8ff4d-wj8z7" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:43.112353   11992 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:43.112353   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300
	I0513 23:04:43.112353   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.112353   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.112353   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.118346   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:43.119547   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:04:43.119547   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.119547   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.119547   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.123057   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:43.125877   11992 pod_ready.go:92] pod "etcd-ha-586300" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:43.125949   11992 pod_ready.go:81] duration metric: took 13.5958ms for pod "etcd-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:43.126009   11992 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:43.126142   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m02
	I0513 23:04:43.126142   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.126142   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.126142   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.129366   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:43.130366   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:04:43.130366   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.130366   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.130366   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.133368   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:43.134363   11992 pod_ready.go:92] pod "etcd-ha-586300-m02" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:43.134363   11992 pod_ready.go:81] duration metric: took 8.3538ms for pod "etcd-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:43.134363   11992 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-586300-m03" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:43.262567   11992 request.go:629] Waited for 128.1618ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m03
	I0513 23:04:43.262651   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m03
	I0513 23:04:43.262651   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.262651   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.262651   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.268409   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:43.466297   11992 request.go:629] Waited for 196.8581ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:43.466500   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:43.466500   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.466580   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.466580   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.471873   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:43.672837   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m03
	I0513 23:04:43.672938   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.672938   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.672938   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.678011   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:43.859711   11992 request.go:629] Waited for 180.2859ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:43.859821   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:43.859821   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:43.860044   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:43.860044   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:43.865613   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:44.140211   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m03
	I0513 23:04:44.140298   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:44.140298   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:44.140298   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:44.147322   11992 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:04:44.264537   11992 request.go:629] Waited for 115.8502ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:44.264845   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:44.264845   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:44.264933   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:44.264933   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:44.270590   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:44.638343   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m03
	I0513 23:04:44.638343   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:44.638343   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:44.638343   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:44.644923   11992 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:04:44.669130   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:44.669130   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:44.669130   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:44.669450   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:44.672596   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:45.137750   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m03
	I0513 23:04:45.137750   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:45.137750   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:45.137750   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:45.157212   11992 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0513 23:04:45.157903   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:45.158003   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:45.158003   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:45.158003   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:45.161196   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:45.162762   11992 pod_ready.go:102] pod "etcd-ha-586300-m03" in "kube-system" namespace has status "Ready":"False"
	I0513 23:04:45.636780   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/etcd-ha-586300-m03
	I0513 23:04:45.636980   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:45.636980   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:45.636980   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:45.640245   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:45.641761   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:45.641838   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:45.641838   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:45.641838   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:45.644940   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:45.645834   11992 pod_ready.go:92] pod "etcd-ha-586300-m03" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:45.645935   11992 pod_ready.go:81] duration metric: took 2.5114732s for pod "etcd-ha-586300-m03" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:45.645935   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:45.667276   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300
	I0513 23:04:45.667276   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:45.667276   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:45.667276   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:45.677992   11992 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0513 23:04:45.871076   11992 request.go:629] Waited for 192.2872ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:04:45.871197   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:04:45.871365   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:45.871365   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:45.871365   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:45.876769   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:45.878173   11992 pod_ready.go:92] pod "kube-apiserver-ha-586300" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:45.878281   11992 pod_ready.go:81] duration metric: took 232.2294ms for pod "kube-apiserver-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:45.878281   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:46.072347   11992 request.go:629] Waited for 193.8518ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300-m02
	I0513 23:04:46.072347   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300-m02
	I0513 23:04:46.072347   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:46.072347   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:46.072347   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:46.077268   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:46.263166   11992 request.go:629] Waited for 183.9502ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:04:46.263559   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:04:46.263559   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:46.263559   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:46.263559   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:46.268862   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:46.269607   11992 pod_ready.go:92] pod "kube-apiserver-ha-586300-m02" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:46.269720   11992 pod_ready.go:81] duration metric: took 391.4232ms for pod "kube-apiserver-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:46.269720   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-586300-m03" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:46.463470   11992 request.go:629] Waited for 193.7425ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300-m03
	I0513 23:04:46.463470   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300-m03
	I0513 23:04:46.463470   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:46.463470   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:46.463729   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:46.470471   11992 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:04:46.665400   11992 request.go:629] Waited for 193.4979ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:46.665698   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:46.665698   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:46.665698   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:46.665698   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:46.670458   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:46.870757   11992 request.go:629] Waited for 93.5191ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300-m03
	I0513 23:04:46.871001   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300-m03
	I0513 23:04:46.871001   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:46.871109   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:46.871109   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:46.875768   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:47.058735   11992 request.go:629] Waited for 181.0397ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:47.059077   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:47.059169   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:47.059169   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:47.059169   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:47.063545   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:47.274723   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300-m03
	I0513 23:04:47.274723   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:47.274723   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:47.274831   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:47.280267   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:47.459963   11992 request.go:629] Waited for 176.7155ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:47.459963   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:47.459963   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:47.459963   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:47.459963   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:47.464600   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:47.773681   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300-m03
	I0513 23:04:47.773681   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:47.773681   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:47.773681   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:47.779445   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:47.867743   11992 request.go:629] Waited for 87.2819ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:47.867944   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:47.868057   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:47.868057   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:47.868057   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:47.874224   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:48.274301   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300-m03
	I0513 23:04:48.274301   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:48.274301   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:48.274301   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:48.279442   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:48.280919   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:48.280981   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:48.280981   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:48.280981   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:48.284450   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:48.285612   11992 pod_ready.go:102] pod "kube-apiserver-ha-586300-m03" in "kube-system" namespace has status "Ready":"False"
	I0513 23:04:48.779396   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-586300-m03
	I0513 23:04:48.779396   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:48.779396   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:48.779396   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:48.782969   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:48.784308   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:48.784308   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:48.784308   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:48.784308   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:48.788456   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:48.790095   11992 pod_ready.go:92] pod "kube-apiserver-ha-586300-m03" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:48.790149   11992 pod_ready.go:81] duration metric: took 2.5203078s for pod "kube-apiserver-ha-586300-m03" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:48.790149   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:48.872612   11992 request.go:629] Waited for 82.4601ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-586300
	I0513 23:04:48.872877   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-586300
	I0513 23:04:48.872877   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:48.872877   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:48.872877   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:48.887490   11992 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0513 23:04:49.059927   11992 request.go:629] Waited for 171.3573ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:04:49.060110   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:04:49.060110   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:49.060110   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:49.060172   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:49.063488   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:49.065187   11992 pod_ready.go:92] pod "kube-controller-manager-ha-586300" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:49.065263   11992 pod_ready.go:81] duration metric: took 275.1031ms for pod "kube-controller-manager-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:49.065263   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:49.265580   11992 request.go:629] Waited for 200.157ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-586300-m02
	I0513 23:04:49.265916   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-586300-m02
	I0513 23:04:49.265916   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:49.265916   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:49.265916   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:49.270289   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:49.469526   11992 request.go:629] Waited for 197.6993ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:04:49.469657   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:04:49.469719   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:49.469719   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:49.469807   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:49.475058   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:49.476117   11992 pod_ready.go:92] pod "kube-controller-manager-ha-586300-m02" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:49.476117   11992 pod_ready.go:81] duration metric: took 410.838ms for pod "kube-controller-manager-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:49.476117   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-586300-m03" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:49.672419   11992 request.go:629] Waited for 196.1296ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-586300-m03
	I0513 23:04:49.672419   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-586300-m03
	I0513 23:04:49.672419   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:49.672419   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:49.672419   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:49.677016   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:49.861399   11992 request.go:629] Waited for 182.8773ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:49.861399   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:49.861399   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:49.861399   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:49.861399   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:49.866016   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:49.866016   11992 pod_ready.go:92] pod "kube-controller-manager-ha-586300-m03" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:49.866016   11992 pod_ready.go:81] duration metric: took 389.8836ms for pod "kube-controller-manager-ha-586300-m03" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:49.866016   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2tqlw" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:50.064581   11992 request.go:629] Waited for 198.4283ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2tqlw
	I0513 23:04:50.064676   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2tqlw
	I0513 23:04:50.064676   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:50.064676   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:50.064676   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:50.075617   11992 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0513 23:04:50.271318   11992 request.go:629] Waited for 192.4575ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:50.271677   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:50.271677   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:50.271677   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:50.271677   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:50.277754   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:50.278308   11992 pod_ready.go:92] pod "kube-proxy-2tqlw" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:50.278408   11992 pod_ready.go:81] duration metric: took 412.3767ms for pod "kube-proxy-2tqlw" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:50.278408   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6mpjv" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:50.460514   11992 request.go:629] Waited for 182.0316ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6mpjv
	I0513 23:04:50.460719   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6mpjv
	I0513 23:04:50.460719   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:50.460719   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:50.460719   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:50.468546   11992 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:04:50.667493   11992 request.go:629] Waited for 197.7412ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:04:50.667662   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:04:50.667662   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:50.667662   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:50.667662   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:50.671986   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:50.673543   11992 pod_ready.go:92] pod "kube-proxy-6mpjv" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:50.673543   11992 pod_ready.go:81] duration metric: took 395.1195ms for pod "kube-proxy-6mpjv" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:50.673621   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-77zxb" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:50.871788   11992 request.go:629] Waited for 198.0932ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-77zxb
	I0513 23:04:50.871788   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-77zxb
	I0513 23:04:50.871980   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:50.871980   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:50.871980   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:50.876397   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:51.061548   11992 request.go:629] Waited for 183.0864ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:04:51.061929   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:04:51.062101   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:51.062101   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:51.062101   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:51.066933   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:51.068422   11992 pod_ready.go:92] pod "kube-proxy-77zxb" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:51.068422   11992 pod_ready.go:81] duration metric: took 394.7847ms for pod "kube-proxy-77zxb" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:51.068422   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:51.267021   11992 request.go:629] Waited for 197.9177ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-586300
	I0513 23:04:51.267271   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-586300
	I0513 23:04:51.267340   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:51.267409   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:51.267434   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:51.272539   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:51.470269   11992 request.go:629] Waited for 196.661ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:04:51.470269   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300
	I0513 23:04:51.470269   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:51.470269   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:51.470269   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:51.476105   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:51.477231   11992 pod_ready.go:92] pod "kube-scheduler-ha-586300" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:51.477340   11992 pod_ready.go:81] duration metric: took 408.9021ms for pod "kube-scheduler-ha-586300" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:51.477340   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:51.673228   11992 request.go:629] Waited for 195.6459ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-586300-m02
	I0513 23:04:51.673381   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-586300-m02
	I0513 23:04:51.673599   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:51.673685   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:51.673685   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:51.682788   11992 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0513 23:04:51.859659   11992 request.go:629] Waited for 176.6124ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:04:51.859659   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m02
	I0513 23:04:51.859659   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:51.859659   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:51.859659   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:51.863659   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:51.864659   11992 pod_ready.go:92] pod "kube-scheduler-ha-586300-m02" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:51.864659   11992 pod_ready.go:81] duration metric: took 387.3039ms for pod "kube-scheduler-ha-586300-m02" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:51.864659   11992 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-586300-m03" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:52.060640   11992 request.go:629] Waited for 195.9733ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-586300-m03
	I0513 23:04:52.060640   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-586300-m03
	I0513 23:04:52.060640   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:52.060865   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:52.060865   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:52.065098   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:52.264257   11992 request.go:629] Waited for 197.7205ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:52.264328   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes/ha-586300-m03
	I0513 23:04:52.264328   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:52.264328   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:52.264328   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:52.267797   11992 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:04:52.268292   11992 pod_ready.go:92] pod "kube-scheduler-ha-586300-m03" in "kube-system" namespace has status "Ready":"True"
	I0513 23:04:52.268292   11992 pod_ready.go:81] duration metric: took 403.6178ms for pod "kube-scheduler-ha-586300-m03" in "kube-system" namespace to be "Ready" ...
	I0513 23:04:52.268292   11992 pod_ready.go:38] duration metric: took 9.1974978s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0513 23:04:52.268292   11992 api_server.go:52] waiting for apiserver process to appear ...
	I0513 23:04:52.277758   11992 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 23:04:52.302829   11992 api_server.go:72] duration metric: took 15.70265s to wait for apiserver process to appear ...
	I0513 23:04:52.303351   11992 api_server.go:88] waiting for apiserver healthz status ...
	I0513 23:04:52.303351   11992 api_server.go:253] Checking apiserver healthz at https://172.23.102.229:8443/healthz ...
	I0513 23:04:52.312025   11992 api_server.go:279] https://172.23.102.229:8443/healthz returned 200:
	ok
	I0513 23:04:52.312886   11992 round_trippers.go:463] GET https://172.23.102.229:8443/version
	I0513 23:04:52.312886   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:52.312987   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:52.312987   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:52.314043   11992 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0513 23:04:52.314557   11992 api_server.go:141] control plane version: v1.30.0
	I0513 23:04:52.314557   11992 api_server.go:131] duration metric: took 11.2056ms to wait for apiserver health ...
	I0513 23:04:52.314557   11992 system_pods.go:43] waiting for kube-system pods to appear ...
	I0513 23:04:52.467102   11992 request.go:629] Waited for 152.4249ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods
	I0513 23:04:52.467102   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods
	I0513 23:04:52.467102   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:52.467429   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:52.467429   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:52.480662   11992 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0513 23:04:52.490874   11992 system_pods.go:59] 24 kube-system pods found
	I0513 23:04:52.490874   11992 system_pods.go:61] "coredns-7db6d8ff4d-4qbhd" [6fa6abce-1f7c-4119-b74c-e4e2275f77f4] Running
	I0513 23:04:52.490939   11992 system_pods.go:61] "coredns-7db6d8ff4d-wj8z7" [21d8cc35-f37a-42b6-9e44-dfce810d1d51] Running
	I0513 23:04:52.490939   11992 system_pods.go:61] "etcd-ha-586300" [a1809532-311c-4f80-9236-fec7256f7b3c] Running
	I0513 23:04:52.490939   11992 system_pods.go:61] "etcd-ha-586300-m02" [37b3bba9-35b3-4723-b954-94c4f45c9b96] Running
	I0513 23:04:52.490939   11992 system_pods.go:61] "etcd-ha-586300-m03" [1a637fcc-ab57-4fc2-be72-e925e46d8670] Running
	I0513 23:04:52.490939   11992 system_pods.go:61] "kindnet-59dc5" [c42f08e1-6016-4dc6-bf46-69571ccfabe8] Running
	I0513 23:04:52.490939   11992 system_pods.go:61] "kindnet-8hh55" [4fb9a98f-06d4-4333-89dc-b90c8b880f92] Running
	I0513 23:04:52.490939   11992 system_pods.go:61] "kindnet-vddtk" [bf6e57db-8270-4024-ba93-abce11d81513] Running
	I0513 23:04:52.490939   11992 system_pods.go:61] "kube-apiserver-ha-586300" [d6659d47-ce69-4334-a35c-7b66898b49de] Running
	I0513 23:04:52.491012   11992 system_pods.go:61] "kube-apiserver-ha-586300-m02" [0b8839d5-3133-4d52-9264-9d998bc54617] Running
	I0513 23:04:52.491012   11992 system_pods.go:61] "kube-apiserver-ha-586300-m03" [3c06b188-7d2a-4252-b636-54695945e26b] Running
	I0513 23:04:52.491012   11992 system_pods.go:61] "kube-controller-manager-ha-586300" [3416887d-320b-4417-b6ba-ffabb7b84885] Running
	I0513 23:04:52.491012   11992 system_pods.go:61] "kube-controller-manager-ha-586300-m02" [eccf51fc-16b7-4d89-95ab-59ec4e8fbc8c] Running
	I0513 23:04:52.491012   11992 system_pods.go:61] "kube-controller-manager-ha-586300-m03" [5e5e1656-8c0a-403c-b8cb-34dc58314947] Running
	I0513 23:04:52.491012   11992 system_pods.go:61] "kube-proxy-2tqlw" [6a4bf957-b55f-463f-aa7f-f2aa15b0f6fe] Running
	I0513 23:04:52.491069   11992 system_pods.go:61] "kube-proxy-6mpjv" [0cd7eb37-2ff4-487e-b5e6-9d71c69a4814] Running
	I0513 23:04:52.491069   11992 system_pods.go:61] "kube-proxy-77zxb" [bc2480b2-3de0-49c4-b84e-8ae7e85829a1] Running
	I0513 23:04:52.491069   11992 system_pods.go:61] "kube-scheduler-ha-586300" [8bb322de-7dd8-4780-ae04-9d18a293aa0b] Running
	I0513 23:04:52.491069   11992 system_pods.go:61] "kube-scheduler-ha-586300-m02" [c3bb6486-257a-4993-9127-34dada81473a] Running
	I0513 23:04:52.491069   11992 system_pods.go:61] "kube-scheduler-ha-586300-m03" [7146ded0-67a1-42b0-898a-d603a3deb02f] Running
	I0513 23:04:52.491069   11992 system_pods.go:61] "kube-vip-ha-586300" [5dfa662f-0df1-485a-a52b-fdcd87e23145] Running
	I0513 23:04:52.491069   11992 system_pods.go:61] "kube-vip-ha-586300-m02" [4372ac88-49f7-4dcd-9c13-1b8484817d28] Running
	I0513 23:04:52.491069   11992 system_pods.go:61] "kube-vip-ha-586300-m03" [7e267e8b-72f0-4f53-acf2-096f2535e1fe] Running
	I0513 23:04:52.491069   11992 system_pods.go:61] "storage-provisioner" [fc11360c-19a1-4d0b-966e-49946c8b0d47] Running
	I0513 23:04:52.491133   11992 system_pods.go:74] duration metric: took 176.5689ms to wait for pod list to return data ...
	I0513 23:04:52.491133   11992 default_sa.go:34] waiting for default service account to be created ...
	I0513 23:04:52.671249   11992 request.go:629] Waited for 180.1086ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/default/serviceaccounts
	I0513 23:04:52.671249   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/default/serviceaccounts
	I0513 23:04:52.671249   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:52.671249   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:52.671249   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:52.675408   11992 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:04:52.675408   11992 default_sa.go:45] found service account: "default"
	I0513 23:04:52.675408   11992 default_sa.go:55] duration metric: took 184.2673ms for default service account to be created ...
	I0513 23:04:52.675408   11992 system_pods.go:116] waiting for k8s-apps to be running ...
	I0513 23:04:52.872985   11992 request.go:629] Waited for 197.5698ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods
	I0513 23:04:52.873190   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/namespaces/kube-system/pods
	I0513 23:04:52.873190   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:52.873190   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:52.873190   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:52.885965   11992 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0513 23:04:52.896305   11992 system_pods.go:86] 24 kube-system pods found
	I0513 23:04:52.896305   11992 system_pods.go:89] "coredns-7db6d8ff4d-4qbhd" [6fa6abce-1f7c-4119-b74c-e4e2275f77f4] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "coredns-7db6d8ff4d-wj8z7" [21d8cc35-f37a-42b6-9e44-dfce810d1d51] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "etcd-ha-586300" [a1809532-311c-4f80-9236-fec7256f7b3c] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "etcd-ha-586300-m02" [37b3bba9-35b3-4723-b954-94c4f45c9b96] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "etcd-ha-586300-m03" [1a637fcc-ab57-4fc2-be72-e925e46d8670] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kindnet-59dc5" [c42f08e1-6016-4dc6-bf46-69571ccfabe8] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kindnet-8hh55" [4fb9a98f-06d4-4333-89dc-b90c8b880f92] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kindnet-vddtk" [bf6e57db-8270-4024-ba93-abce11d81513] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kube-apiserver-ha-586300" [d6659d47-ce69-4334-a35c-7b66898b49de] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kube-apiserver-ha-586300-m02" [0b8839d5-3133-4d52-9264-9d998bc54617] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kube-apiserver-ha-586300-m03" [3c06b188-7d2a-4252-b636-54695945e26b] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kube-controller-manager-ha-586300" [3416887d-320b-4417-b6ba-ffabb7b84885] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kube-controller-manager-ha-586300-m02" [eccf51fc-16b7-4d89-95ab-59ec4e8fbc8c] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kube-controller-manager-ha-586300-m03" [5e5e1656-8c0a-403c-b8cb-34dc58314947] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kube-proxy-2tqlw" [6a4bf957-b55f-463f-aa7f-f2aa15b0f6fe] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kube-proxy-6mpjv" [0cd7eb37-2ff4-487e-b5e6-9d71c69a4814] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kube-proxy-77zxb" [bc2480b2-3de0-49c4-b84e-8ae7e85829a1] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kube-scheduler-ha-586300" [8bb322de-7dd8-4780-ae04-9d18a293aa0b] Running
	I0513 23:04:52.896305   11992 system_pods.go:89] "kube-scheduler-ha-586300-m02" [c3bb6486-257a-4993-9127-34dada81473a] Running
	I0513 23:04:52.896861   11992 system_pods.go:89] "kube-scheduler-ha-586300-m03" [7146ded0-67a1-42b0-898a-d603a3deb02f] Running
	I0513 23:04:52.896861   11992 system_pods.go:89] "kube-vip-ha-586300" [5dfa662f-0df1-485a-a52b-fdcd87e23145] Running
	I0513 23:04:52.896861   11992 system_pods.go:89] "kube-vip-ha-586300-m02" [4372ac88-49f7-4dcd-9c13-1b8484817d28] Running
	I0513 23:04:52.896861   11992 system_pods.go:89] "kube-vip-ha-586300-m03" [7e267e8b-72f0-4f53-acf2-096f2535e1fe] Running
	I0513 23:04:52.896861   11992 system_pods.go:89] "storage-provisioner" [fc11360c-19a1-4d0b-966e-49946c8b0d47] Running
	I0513 23:04:52.896861   11992 system_pods.go:126] duration metric: took 221.4447ms to wait for k8s-apps to be running ...
	I0513 23:04:52.896861   11992 system_svc.go:44] waiting for kubelet service to be running ....
	I0513 23:04:52.905761   11992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 23:04:52.932075   11992 system_svc.go:56] duration metric: took 35.213ms WaitForService to wait for kubelet
	I0513 23:04:52.932169   11992 kubeadm.go:576] duration metric: took 16.3319651s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 23:04:52.932232   11992 node_conditions.go:102] verifying NodePressure condition ...
	I0513 23:04:53.061882   11992 request.go:629] Waited for 129.6449ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.229:8443/api/v1/nodes
	I0513 23:04:53.062162   11992 round_trippers.go:463] GET https://172.23.102.229:8443/api/v1/nodes
	I0513 23:04:53.062162   11992 round_trippers.go:469] Request Headers:
	I0513 23:04:53.062220   11992 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:04:53.062220   11992 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:04:53.067555   11992 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:04:53.069829   11992 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0513 23:04:53.069904   11992 node_conditions.go:123] node cpu capacity is 2
	I0513 23:04:53.069904   11992 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0513 23:04:53.069988   11992 node_conditions.go:123] node cpu capacity is 2
	I0513 23:04:53.069988   11992 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0513 23:04:53.069988   11992 node_conditions.go:123] node cpu capacity is 2
	I0513 23:04:53.069988   11992 node_conditions.go:105] duration metric: took 137.7513ms to run NodePressure ...
	I0513 23:04:53.070057   11992 start.go:240] waiting for startup goroutines ...
	I0513 23:04:53.070107   11992 start.go:254] writing updated cluster config ...
	I0513 23:04:53.079632   11992 ssh_runner.go:195] Run: rm -f paused
	I0513 23:04:53.197275   11992 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0513 23:04:53.200376   11992 out.go:177] * Done! kubectl is now configured to use "ha-586300" cluster and "default" namespace by default
	
	
	==> Docker <==
	May 13 22:57:55 ha-586300 dockerd[1332]: time="2024-05-13T22:57:55.704452603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 13 22:57:55 ha-586300 dockerd[1332]: time="2024-05-13T22:57:55.704471404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 22:57:55 ha-586300 dockerd[1332]: time="2024-05-13T22:57:55.704637711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 22:57:55 ha-586300 dockerd[1332]: time="2024-05-13T22:57:55.790142826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 13 22:57:55 ha-586300 dockerd[1332]: time="2024-05-13T22:57:55.793187551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 13 22:57:55 ha-586300 dockerd[1332]: time="2024-05-13T22:57:55.793277155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 22:57:55 ha-586300 dockerd[1332]: time="2024-05-13T22:57:55.793459463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 23:05:28 ha-586300 dockerd[1332]: time="2024-05-13T23:05:28.652843516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 13 23:05:28 ha-586300 dockerd[1332]: time="2024-05-13T23:05:28.652951620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 13 23:05:28 ha-586300 dockerd[1332]: time="2024-05-13T23:05:28.653003222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 23:05:28 ha-586300 dockerd[1332]: time="2024-05-13T23:05:28.653840451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 23:05:28 ha-586300 cri-dockerd[1228]: time="2024-05-13T23:05:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/29c75c86289830befef480ac259a062919c9f686f010616e6d34666d63b01a71/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 13 23:05:30 ha-586300 cri-dockerd[1228]: time="2024-05-13T23:05:30Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	May 13 23:05:30 ha-586300 dockerd[1332]: time="2024-05-13T23:05:30.295391717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 13 23:05:30 ha-586300 dockerd[1332]: time="2024-05-13T23:05:30.295463222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 13 23:05:30 ha-586300 dockerd[1332]: time="2024-05-13T23:05:30.295480423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 23:05:30 ha-586300 dockerd[1332]: time="2024-05-13T23:05:30.295587130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 23:06:29 ha-586300 dockerd[1326]: 2024/05/13 23:06:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:06:29 ha-586300 dockerd[1326]: 2024/05/13 23:06:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:06:29 ha-586300 dockerd[1326]: 2024/05/13 23:06:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:06:29 ha-586300 dockerd[1326]: 2024/05/13 23:06:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:06:29 ha-586300 dockerd[1326]: 2024/05/13 23:06:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:06:29 ha-586300 dockerd[1326]: 2024/05/13 23:06:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:06:29 ha-586300 dockerd[1326]: 2024/05/13 23:06:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 13 23:06:29 ha-586300 dockerd[1326]: 2024/05/13 23:06:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	82b9cb93f81cb       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago      Running             busybox                   0                   29c75c8628983       busybox-fc5497c4f-v5w28
	3cca1819e1453       cbb01a7bd410d                                                                                         27 minutes ago      Running             coredns                   0                   660d74b20ca07       coredns-7db6d8ff4d-wj8z7
	0dd2364808abe       cbb01a7bd410d                                                                                         27 minutes ago      Running             coredns                   0                   60e4c610c1f0e       coredns-7db6d8ff4d-4qbhd
	a1cd86153923c       6e38f40d628db                                                                                         27 minutes ago      Running             storage-provisioner       0                   1dc60ff7d7247       storage-provisioner
	2a50dd327cee4       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              27 minutes ago      Running             kindnet-cni               0                   3772bac758f7f       kindnet-8hh55
	76729111ccec0       a0bf559e280cf                                                                                         27 minutes ago      Running             kube-proxy                0                   865c3491222f4       kube-proxy-77zxb
	d7f2345199207       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     28 minutes ago      Running             kube-vip                  0                   b91c3c1e3ee58       kube-vip-ha-586300
	f4e45fa6a7ff1       259c8277fcbbc                                                                                         28 minutes ago      Running             kube-scheduler            0                   29cd2491a9da3       kube-scheduler-ha-586300
	5aa59ec7b3e08       c7aad43836fa5                                                                                         28 minutes ago      Running             kube-controller-manager   0                   fee036179772b       kube-controller-manager-ha-586300
	54d5259eb4fda       c42f13656d0b2                                                                                         28 minutes ago      Running             kube-apiserver            0                   1d8f3d2c1281e       kube-apiserver-ha-586300
	6f280a956ea0d       3861cfcd7c04c                                                                                         28 minutes ago      Running             etcd                      0                   97eb70a28a452       etcd-ha-586300
	
	
	==> coredns [0dd2364808ab] <==
	[INFO] 10.244.1.2:43544 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.031890751s
	[INFO] 10.244.2.2:36236 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000229916s
	[INFO] 10.244.2.2:44456 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.136841212s
	[INFO] 10.244.0.4:47020 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110607s
	[INFO] 10.244.0.4:55740 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.089271972s
	[INFO] 10.244.1.2:39460 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093206s
	[INFO] 10.244.1.2:33929 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000192913s
	[INFO] 10.244.1.2:55027 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.028391608s
	[INFO] 10.244.1.2:42290 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000115608s
	[INFO] 10.244.1.2:60562 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105007s
	[INFO] 10.244.2.2:42343 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00029182s
	[INFO] 10.244.2.2:36425 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023484878s
	[INFO] 10.244.2.2:41351 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000345323s
	[INFO] 10.244.2.2:47550 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000157811s
	[INFO] 10.244.0.4:44658 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000242316s
	[INFO] 10.244.0.4:45569 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000159411s
	[INFO] 10.244.0.4:45724 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059004s
	[INFO] 10.244.0.4:48470 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014951s
	[INFO] 10.244.1.2:59764 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015371s
	[INFO] 10.244.2.2:49551 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184513s
	[INFO] 10.244.0.4:37570 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114208s
	[INFO] 10.244.0.4:46088 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076105s
	[INFO] 10.244.1.2:34919 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000129208s
	[INFO] 10.244.1.2:33254 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000115208s
	[INFO] 10.244.2.2:54967 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140109s
	
	
	==> coredns [3cca1819e145] <==
	[INFO] 10.244.2.2:55407 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088406s
	[INFO] 10.244.2.2:45280 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123108s
	[INFO] 10.244.2.2:43201 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00015161s
	[INFO] 10.244.2.2:56254 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051904s
	[INFO] 10.244.0.4:49781 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103507s
	[INFO] 10.244.0.4:37159 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000161311s
	[INFO] 10.244.0.4:42140 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012681351s
	[INFO] 10.244.0.4:36016 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141609s
	[INFO] 10.244.1.2:47054 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190913s
	[INFO] 10.244.1.2:33317 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000155411s
	[INFO] 10.244.1.2:38499 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014551s
	[INFO] 10.244.2.2:42977 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093907s
	[INFO] 10.244.2.2:40377 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127008s
	[INFO] 10.244.2.2:51922 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056604s
	[INFO] 10.244.0.4:41218 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015181s
	[INFO] 10.244.0.4:47098 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000131609s
	[INFO] 10.244.1.2:51316 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122708s
	[INFO] 10.244.1.2:54718 - 5 "PTR IN 1.96.23.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000117108s
	[INFO] 10.244.2.2:53578 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000089606s
	[INFO] 10.244.2.2:55549 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000103207s
	[INFO] 10.244.2.2:53562 - 5 "PTR IN 1.96.23.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000052303s
	[INFO] 10.244.0.4:60896 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205013s
	[INFO] 10.244.0.4:34122 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000115108s
	[INFO] 10.244.0.4:48727 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000146909s
	[INFO] 10.244.0.4:47037 - 5 "PTR IN 1.96.23.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000052804s
	
	
	==> describe nodes <==
	Name:               ha-586300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-586300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	                    minikube.k8s.io/name=ha-586300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_13T22_57_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 May 2024 22:57:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-586300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 May 2024 23:25:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 May 2024 23:21:19 +0000   Mon, 13 May 2024 22:57:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 May 2024 23:21:19 +0000   Mon, 13 May 2024 22:57:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 May 2024 23:21:19 +0000   Mon, 13 May 2024 22:57:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 May 2024 23:21:19 +0000   Mon, 13 May 2024 22:57:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.102.229
	  Hostname:    ha-586300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 766fa47b08fc4cd186a4572970ac1cb6
	  System UUID:                cdb7f6e8-e965-6c40-80b5-9bdc5dedc2be
	  Boot ID:                    3912f1b6-ba39-4062-bb61-a816e1502cb2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-v5w28              0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7db6d8ff4d-4qbhd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 coredns-7db6d8ff4d-wj8z7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-ha-586300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kindnet-8hh55                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-586300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-ha-586300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-77zxb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-586300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-vip-ha-586300                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27m   kube-proxy       
	  Normal  Starting                 28m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m   kubelet          Node ha-586300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m   kubelet          Node ha-586300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m   kubelet          Node ha-586300 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27m   node-controller  Node ha-586300 event: Registered Node ha-586300 in Controller
	  Normal  NodeReady                27m   kubelet          Node ha-586300 status is now: NodeReady
	  Normal  RegisteredNode           24m   node-controller  Node ha-586300 event: Registered Node ha-586300 in Controller
	  Normal  RegisteredNode           20m   node-controller  Node ha-586300 event: Registered Node ha-586300 in Controller
	
	
	Name:               ha-586300-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-586300-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	                    minikube.k8s.io/name=ha-586300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_13T23_01_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 May 2024 23:00:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-586300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 May 2024 23:25:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 May 2024 23:23:32 +0000   Mon, 13 May 2024 23:23:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 May 2024 23:23:32 +0000   Mon, 13 May 2024 23:23:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 May 2024 23:23:32 +0000   Mon, 13 May 2024 23:23:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 May 2024 23:23:32 +0000   Mon, 13 May 2024 23:23:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.108.85
	  Hostname:    ha-586300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 10b9487f64964b3780e37561b054805f
	  System UUID:                805a87ba-4250-134c-ae99-e6f53ab0643b
	  Boot ID:                    2ef337cd-ad5c-411c-96e4-816807097bf7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hd72c                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 etcd-ha-586300-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         24m
	  kube-system                 kindnet-vddtk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-apiserver-ha-586300-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-ha-586300-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-6mpjv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-ha-586300-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-vip-ha-586300-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 24m                  kube-proxy       
	  Normal   Starting                 2m3s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  24m (x8 over 24m)    kubelet          Node ha-586300-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    24m (x8 over 24m)    kubelet          Node ha-586300-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     24m (x7 over 24m)    kubelet          Node ha-586300-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  24m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           24m                  node-controller  Node ha-586300-m02 event: Registered Node ha-586300-m02 in Controller
	  Normal   RegisteredNode           24m                  node-controller  Node ha-586300-m02 event: Registered Node ha-586300-m02 in Controller
	  Normal   RegisteredNode           20m                  node-controller  Node ha-586300-m02 event: Registered Node ha-586300-m02 in Controller
	  Normal   NodeNotReady             4m43s                node-controller  Node ha-586300-m02 status is now: NodeNotReady
	  Normal   Starting                 2m7s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m7s (x2 over 2m7s)  kubelet          Node ha-586300-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m7s (x2 over 2m7s)  kubelet          Node ha-586300-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m7s (x2 over 2m7s)  kubelet          Node ha-586300-m02 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m7s                 kubelet          Node ha-586300-m02 has been rebooted, boot id: 2ef337cd-ad5c-411c-96e4-816807097bf7
	  Normal   NodeReady                2m7s                 kubelet          Node ha-586300-m02 status is now: NodeReady
	  Normal   NodeAllocatableEnforced  2m7s                 kubelet          Updated Node Allocatable limit across pods
	
	
	Name:               ha-586300-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-586300-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	                    minikube.k8s.io/name=ha-586300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_13T23_04_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 May 2024 23:04:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-586300-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 May 2024 23:25:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 May 2024 23:21:19 +0000   Mon, 13 May 2024 23:04:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 May 2024 23:21:19 +0000   Mon, 13 May 2024 23:04:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 May 2024 23:21:19 +0000   Mon, 13 May 2024 23:04:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 May 2024 23:21:19 +0000   Mon, 13 May 2024 23:04:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.109.129
	  Hostname:    ha-586300-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3af176932f04986b129edbdfe6ef66e
	  System UUID:                0ab3db21-b362-594f-971e-39a38f19c4b7
	  Boot ID:                    49acf628-75bd-4969-9049-a08500a01e57
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-njj9r                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 etcd-ha-586300-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kindnet-59dc5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	  kube-system                 kube-apiserver-ha-586300-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-586300-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-2tqlw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-586300-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-586300-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node ha-586300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node ha-586300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node ha-586300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node ha-586300-m03 event: Registered Node ha-586300-m03 in Controller
	  Normal  RegisteredNode           21m                node-controller  Node ha-586300-m03 event: Registered Node ha-586300-m03 in Controller
	  Normal  RegisteredNode           20m                node-controller  Node ha-586300-m03 event: Registered Node ha-586300-m03 in Controller
	
	
	Name:               ha-586300-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-586300-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	                    minikube.k8s.io/name=ha-586300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_13T23_09_16_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 May 2024 23:09:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-586300-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 May 2024 23:25:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 May 2024 23:25:06 +0000   Mon, 13 May 2024 23:09:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 May 2024 23:25:06 +0000   Mon, 13 May 2024 23:09:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 May 2024 23:25:06 +0000   Mon, 13 May 2024 23:09:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 May 2024 23:25:06 +0000   Mon, 13 May 2024 23:09:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.110.77
	  Hostname:    ha-586300-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 8c0db624613940e0a7544587bb7ea9fd
	  System UUID:                68b64a31-15d4-c149-8e3e-e7aa1c55ee36
	  Boot ID:                    41d9bc77-8ac4-43aa-9313-240c9d58d5a5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-jzmns       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-proxy-2q4bv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientMemory  16m (x2 over 16m)  kubelet          Node ha-586300-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x2 over 16m)  kubelet          Node ha-586300-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x2 over 16m)  kubelet          Node ha-586300-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m                node-controller  Node ha-586300-m04 event: Registered Node ha-586300-m04 in Controller
	  Normal  RegisteredNode           16m                node-controller  Node ha-586300-m04 event: Registered Node ha-586300-m04 in Controller
	  Normal  RegisteredNode           16m                node-controller  Node ha-586300-m04 event: Registered Node ha-586300-m04 in Controller
	  Normal  NodeReady                16m                kubelet          Node ha-586300-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.077887] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[May13 22:56] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.165597] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[ +27.686009] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.076728] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.483758] systemd-fstab-generator[986]: Ignoring "noauto" option for root device
	[  +0.175285] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[  +0.200234] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	[  +2.728981] systemd-fstab-generator[1181]: Ignoring "noauto" option for root device
	[  +0.167405] systemd-fstab-generator[1193]: Ignoring "noauto" option for root device
	[  +0.168870] systemd-fstab-generator[1205]: Ignoring "noauto" option for root device
	[  +0.249018] systemd-fstab-generator[1221]: Ignoring "noauto" option for root device
	[May13 22:57] systemd-fstab-generator[1318]: Ignoring "noauto" option for root device
	[  +0.089588] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.722623] systemd-fstab-generator[1524]: Ignoring "noauto" option for root device
	[  +5.461929] systemd-fstab-generator[1714]: Ignoring "noauto" option for root device
	[  +0.098349] kauditd_printk_skb: 73 callbacks suppressed
	[  +8.516983] systemd-fstab-generator[2210]: Ignoring "noauto" option for root device
	[  +0.118307] kauditd_printk_skb: 72 callbacks suppressed
	[ +14.758608] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.266877] kauditd_printk_skb: 29 callbacks suppressed
	[May13 23:00] hrtimer: interrupt took 2839798 ns
	[May13 23:01] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [6f280a956ea0] <==
	{"level":"warn","ts":"2024-05-13T23:25:39.547506Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e433e3e9aac3d2bb","from":"e433e3e9aac3d2bb","remote-peer-id":"81e76dc494655f61","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-13T23:25:39.557276Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e433e3e9aac3d2bb","from":"e433e3e9aac3d2bb","remote-peer-id":"81e76dc494655f61","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-13T23:25:39.561608Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e433e3e9aac3d2bb","from":"e433e3e9aac3d2bb","remote-peer-id":"81e76dc494655f61","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-13T23:25:39.574347Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e433e3e9aac3d2bb","from":"e433e3e9aac3d2bb","remote-peer-id":"81e76dc494655f61","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-13T23:25:39.583482Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e433e3e9aac3d2bb","from":"e433e3e9aac3d2bb","remote-peer-id":"81e76dc494655f61","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-13T23:25:39.591876Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e433e3e9aac3d2bb","from":"e433e3e9aac3d2bb","remote-peer-id":"81e76dc494655f61","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-13T23:25:39.596891Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e433e3e9aac3d2bb","from":"e433e3e9aac3d2bb","remote-peer-id":"81e76dc494655f61","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-13T23:25:39.600594Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e433e3e9aac3d2bb","from":"e433e3e9aac3d2bb","remote-peer-id":"81e76dc494655f61","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-13T23:25:39.61014Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e433e3e9aac3d2bb","from":"e433e3e9aac3d2bb","remote-peer-id":"81e76dc494655f61","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-13T23:25:39.618257Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e433e3e9aac3d2bb","from":"e433e3e9aac3d2bb","remote-peer-id":"81e76dc494655f61","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-13T23:25:39.627783Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e433e3e9aac3d2bb","from":"e433e3e9aac3d2bb","remote-peer-id":"81e76dc494655f61","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-13T23:25:39.632547Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e433e3e9aac3d2bb","from":"e433e3e9aac3d2bb","remote-peer-id":"81e76dc494655f61","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-13T23:25:39.636902Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e433e3e9aac3d2bb","from":"e433e3e9aac3d2bb","remote-peer-id":"81e76dc494655f61","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-13T23:25:39.645454Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e433e3e9aac3d2bb","from":"e433e3e9aac3d2bb","remote-peer-id":"81e76dc494655f61","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-13T23:25:39.649607Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e433e3e9aac3d2bb","from":"e433e3e9aac3d2bb","remote-peer-id":"81e76dc494655f61","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-13T23:25:39.657178Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e433e3e9aac3d2bb","from":"e433e3e9aac3d2bb","remote-peer-id":"81e76dc494655f61","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-13T23:25:39.664632Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e433e3e9aac3d2bb","from":"e433e3e9aac3d2bb","remote-peer-id":"81e76dc494655f61","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-13T23:25:39.669769Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e433e3e9aac3d2bb","from":"e433e3e9aac3d2bb","remote-peer-id":"81e76dc494655f61","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-13T23:25:39.673863Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e433e3e9aac3d2bb","from":"e433e3e9aac3d2bb","remote-peer-id":"81e76dc494655f61","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-13T23:25:39.68076Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e433e3e9aac3d2bb","from":"e433e3e9aac3d2bb","remote-peer-id":"81e76dc494655f61","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-13T23:25:39.687753Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e433e3e9aac3d2bb","from":"e433e3e9aac3d2bb","remote-peer-id":"81e76dc494655f61","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-13T23:25:39.697084Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e433e3e9aac3d2bb","from":"e433e3e9aac3d2bb","remote-peer-id":"81e76dc494655f61","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-13T23:25:39.712431Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e433e3e9aac3d2bb","from":"e433e3e9aac3d2bb","remote-peer-id":"81e76dc494655f61","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-13T23:25:39.715183Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e433e3e9aac3d2bb","from":"e433e3e9aac3d2bb","remote-peer-id":"81e76dc494655f61","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-13T23:25:39.745735Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e433e3e9aac3d2bb","from":"e433e3e9aac3d2bb","remote-peer-id":"81e76dc494655f61","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 23:25:39 up 30 min,  0 users,  load average: 0.35, 0.58, 0.46
	Linux ha-586300 5.10.207 #1 SMP Thu May 9 02:07:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2a50dd327cee] <==
	I0513 23:25:02.365230       1 main.go:250] Node ha-586300-m04 has CIDR [10.244.3.0/24] 
	I0513 23:25:12.381267       1 main.go:223] Handling node with IPs: map[172.23.102.229:{}]
	I0513 23:25:12.381367       1 main.go:227] handling current node
	I0513 23:25:12.381381       1 main.go:223] Handling node with IPs: map[172.23.108.85:{}]
	I0513 23:25:12.381391       1 main.go:250] Node ha-586300-m02 has CIDR [10.244.1.0/24] 
	I0513 23:25:12.381513       1 main.go:223] Handling node with IPs: map[172.23.109.129:{}]
	I0513 23:25:12.381704       1 main.go:250] Node ha-586300-m03 has CIDR [10.244.2.0/24] 
	I0513 23:25:12.382076       1 main.go:223] Handling node with IPs: map[172.23.110.77:{}]
	I0513 23:25:12.382106       1 main.go:250] Node ha-586300-m04 has CIDR [10.244.3.0/24] 
	I0513 23:25:22.390274       1 main.go:223] Handling node with IPs: map[172.23.102.229:{}]
	I0513 23:25:22.390385       1 main.go:227] handling current node
	I0513 23:25:22.390399       1 main.go:223] Handling node with IPs: map[172.23.108.85:{}]
	I0513 23:25:22.390407       1 main.go:250] Node ha-586300-m02 has CIDR [10.244.1.0/24] 
	I0513 23:25:22.390849       1 main.go:223] Handling node with IPs: map[172.23.109.129:{}]
	I0513 23:25:22.391019       1 main.go:250] Node ha-586300-m03 has CIDR [10.244.2.0/24] 
	I0513 23:25:22.395178       1 main.go:223] Handling node with IPs: map[172.23.110.77:{}]
	I0513 23:25:22.395281       1 main.go:250] Node ha-586300-m04 has CIDR [10.244.3.0/24] 
	I0513 23:25:32.404618       1 main.go:223] Handling node with IPs: map[172.23.102.229:{}]
	I0513 23:25:32.404882       1 main.go:227] handling current node
	I0513 23:25:32.404965       1 main.go:223] Handling node with IPs: map[172.23.108.85:{}]
	I0513 23:25:32.404991       1 main.go:250] Node ha-586300-m02 has CIDR [10.244.1.0/24] 
	I0513 23:25:32.405130       1 main.go:223] Handling node with IPs: map[172.23.109.129:{}]
	I0513 23:25:32.405239       1 main.go:250] Node ha-586300-m03 has CIDR [10.244.2.0/24] 
	I0513 23:25:32.405309       1 main.go:223] Handling node with IPs: map[172.23.110.77:{}]
	I0513 23:25:32.405345       1 main.go:250] Node ha-586300-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [54d5259eb4fd] <==
	Trace[1504576256]: [515.098696ms] [515.098696ms] END
	E0513 23:05:33.827700       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51081: use of closed network connection
	E0513 23:05:34.383158       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51084: use of closed network connection
	E0513 23:05:35.902101       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51086: use of closed network connection
	E0513 23:05:36.390860       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51088: use of closed network connection
	E0513 23:05:36.861383       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51090: use of closed network connection
	E0513 23:05:37.285810       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51092: use of closed network connection
	E0513 23:05:37.706353       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51094: use of closed network connection
	E0513 23:05:38.133855       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51096: use of closed network connection
	E0513 23:05:38.538630       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51098: use of closed network connection
	E0513 23:05:39.310122       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51101: use of closed network connection
	E0513 23:05:49.710067       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51103: use of closed network connection
	E0513 23:05:50.119886       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51107: use of closed network connection
	E0513 23:06:00.528565       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51109: use of closed network connection
	E0513 23:06:00.961250       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51111: use of closed network connection
	E0513 23:06:11.386582       1 conn.go:339] Error on socket receive: read tcp 172.23.111.254:8443->172.23.96.1:51113: use of closed network connection
	W0513 23:20:38.039941       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.102.229 172.23.109.129]
	I0513 23:20:41.539234       1 trace.go:236] Trace[682617566]: "Update" accept:application/json, */*,audit-id:bef085b2-e7b2-472b-9938-b636b2673678,client:172.23.102.229,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (13-May-2024 23:20:41.005) (total time: 531ms):
	Trace[682617566]: ["GuaranteedUpdate etcd3" audit-id:bef085b2-e7b2-472b-9938-b636b2673678,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 531ms (23:20:41.005)
	Trace[682617566]:  ---"Txn call completed" 530ms (23:20:41.537)]
	Trace[682617566]: [531.835594ms] [531.835594ms] END
	I0513 23:20:48.255649       1 trace.go:236] Trace[1133455440]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.23.102.229,type:*v1.Endpoints,resource:apiServerIPInfo (13-May-2024 23:20:47.628) (total time: 627ms):
	Trace[1133455440]: ---"Transaction prepared" 311ms (23:20:47.943)
	Trace[1133455440]: ---"Txn call completed" 312ms (23:20:48.255)
	Trace[1133455440]: [627.33112ms] [627.33112ms] END
	
	
	==> kube-controller-manager [5aa59ec7b3e0] <==
	I0513 23:05:27.518491       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.103µs"
	I0513 23:05:27.604260       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.059604ms"
	I0513 23:05:27.604690       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.502µs"
	I0513 23:05:27.689481       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="88.303µs"
	I0513 23:05:28.499804       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.302µs"
	I0513 23:05:28.789448       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.903µs"
	I0513 23:05:29.919071       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.94439ms"
	I0513 23:05:29.919151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.103µs"
	I0513 23:05:30.537352       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.510093ms"
	I0513 23:05:30.537999       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="92.806µs"
	I0513 23:05:30.965455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.803635ms"
	I0513 23:05:30.965698       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.504µs"
	E0513 23:09:16.546192       1 certificate_controller.go:146] Sync csr-zfkj2 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-zfkj2": the object has been modified; please apply your changes to the latest version and try again
	E0513 23:09:16.574787       1 certificate_controller.go:146] Sync csr-zfkj2 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-zfkj2": the object has been modified; please apply your changes to the latest version and try again
	I0513 23:09:16.637796       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-586300-m04\" does not exist"
	I0513 23:09:16.702773       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-586300-m04" podCIDRs=["10.244.3.0/24"]
	I0513 23:09:21.652142       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-586300-m04"
	I0513 23:09:36.280871       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-586300-m04"
	I0513 23:20:56.836287       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-586300-m04"
	I0513 23:20:57.091292       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.176284ms"
	I0513 23:20:57.091437       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.702µs"
	I0513 23:23:32.219592       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-586300-m04"
	I0513 23:23:33.246028       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.402µs"
	I0513 23:23:37.078255       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.437077ms"
	I0513 23:23:37.079005       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.604µs"
	
	
	==> kube-proxy [76729111ccec] <==
	I0513 22:57:43.581221       1 server_linux.go:69] "Using iptables proxy"
	I0513 22:57:43.609494       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.102.229"]
	I0513 22:57:43.668028       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0513 22:57:43.668180       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0513 22:57:43.668241       1 server_linux.go:165] "Using iptables Proxier"
	I0513 22:57:43.672519       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0513 22:57:43.672891       1 server.go:872] "Version info" version="v1.30.0"
	I0513 22:57:43.673286       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0513 22:57:43.677378       1 config.go:192] "Starting service config controller"
	I0513 22:57:43.678115       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0513 22:57:43.678268       1 config.go:101] "Starting endpoint slice config controller"
	I0513 22:57:43.678472       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0513 22:57:43.680090       1 config.go:319] "Starting node config controller"
	I0513 22:57:43.684681       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0513 22:57:43.779333       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0513 22:57:43.779388       1 shared_informer.go:320] Caches are synced for service config
	I0513 22:57:43.789057       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f4e45fa6a7ff] <==
	W0513 22:57:26.091329       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0513 22:57:26.091613       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0513 22:57:26.175961       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0513 22:57:26.176285       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0513 22:57:26.360733       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0513 22:57:26.360764       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0513 22:57:26.360805       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0513 22:57:26.360819       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0513 22:57:26.434970       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0513 22:57:26.435739       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0513 22:57:26.455981       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0513 22:57:26.456169       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0513 22:57:26.507102       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0513 22:57:26.507320       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0513 22:57:26.570578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0513 22:57:26.570736       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0513 22:57:26.637206       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0513 22:57:26.637250       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0513 22:57:26.682315       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0513 22:57:26.682358       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0513 22:57:26.689206       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0513 22:57:26.689296       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0513 22:57:26.812562       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0513 22:57:26.812868       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0513 22:57:28.939739       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 13 23:21:28 ha-586300 kubelet[2217]: E0513 23:21:28.546715    2217 iptables.go:577] "Could not set up iptables canary" err=<
	May 13 23:21:28 ha-586300 kubelet[2217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 13 23:21:28 ha-586300 kubelet[2217]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 13 23:21:28 ha-586300 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 13 23:21:28 ha-586300 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 13 23:22:28 ha-586300 kubelet[2217]: E0513 23:22:28.546261    2217 iptables.go:577] "Could not set up iptables canary" err=<
	May 13 23:22:28 ha-586300 kubelet[2217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 13 23:22:28 ha-586300 kubelet[2217]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 13 23:22:28 ha-586300 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 13 23:22:28 ha-586300 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 13 23:23:28 ha-586300 kubelet[2217]: E0513 23:23:28.545207    2217 iptables.go:577] "Could not set up iptables canary" err=<
	May 13 23:23:28 ha-586300 kubelet[2217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 13 23:23:28 ha-586300 kubelet[2217]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 13 23:23:28 ha-586300 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 13 23:23:28 ha-586300 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 13 23:24:28 ha-586300 kubelet[2217]: E0513 23:24:28.545347    2217 iptables.go:577] "Could not set up iptables canary" err=<
	May 13 23:24:28 ha-586300 kubelet[2217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 13 23:24:28 ha-586300 kubelet[2217]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 13 23:24:28 ha-586300 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 13 23:24:28 ha-586300 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 13 23:25:28 ha-586300 kubelet[2217]: E0513 23:25:28.546189    2217 iptables.go:577] "Could not set up iptables canary" err=<
	May 13 23:25:28 ha-586300 kubelet[2217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 13 23:25:28 ha-586300 kubelet[2217]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 13 23:25:28 ha-586300 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 13 23:25:28 ha-586300 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 23:25:32.211187    4560 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-586300 -n ha-586300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-586300 -n ha-586300: (10.6392531s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-586300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (259.47s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (52.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-101100 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-101100 -- exec busybox-fc5497c4f-q7442 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-101100 -- exec busybox-fc5497c4f-q7442 -- sh -c "ping -c 1 172.23.96.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-101100 -- exec busybox-fc5497c4f-q7442 -- sh -c "ping -c 1 172.23.96.1": exit status 1 (10.3891499s)

                                                
                                                
-- stdout --
	PING 172.23.96.1 (172.23.96.1): 56 data bytes
	
	--- 172.23.96.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 23:59:53.972536   10052 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.23.96.1) from pod (busybox-fc5497c4f-q7442): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-101100 -- exec busybox-fc5497c4f-xqj6w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-101100 -- exec busybox-fc5497c4f-xqj6w -- sh -c "ping -c 1 172.23.96.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-101100 -- exec busybox-fc5497c4f-xqj6w -- sh -c "ping -c 1 172.23.96.1": exit status 1 (10.4179459s)

                                                
                                                
-- stdout --
	PING 172.23.96.1 (172.23.96.1): 56 data bytes
	
	--- 172.23.96.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0514 00:00:04.795054    3400 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.23.96.1) from pod (busybox-fc5497c4f-xqj6w): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-101100 -n multinode-101100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-101100 -n multinode-101100: (10.9571061s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 logs -n 25: (7.6207351s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-505400 ssh -- ls                    | mount-start-2-505400 | minikube5\jenkins | v1.33.1 | 13 May 24 23:49 UTC | 13 May 24 23:50 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-433500                           | mount-start-1-433500 | minikube5\jenkins | v1.33.1 | 13 May 24 23:50 UTC | 13 May 24 23:50 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-505400 ssh -- ls                    | mount-start-2-505400 | minikube5\jenkins | v1.33.1 | 13 May 24 23:50 UTC | 13 May 24 23:50 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-505400                           | mount-start-2-505400 | minikube5\jenkins | v1.33.1 | 13 May 24 23:50 UTC | 13 May 24 23:51 UTC |
	| start   | -p mount-start-2-505400                           | mount-start-2-505400 | minikube5\jenkins | v1.33.1 | 13 May 24 23:51 UTC | 13 May 24 23:52 UTC |
	| mount   | C:\Users\jenkins.minikube5:/minikube-host         | mount-start-2-505400 | minikube5\jenkins | v1.33.1 | 13 May 24 23:52 UTC |                     |
	|         | --profile mount-start-2-505400 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         | 0                                                 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-505400 ssh -- ls                    | mount-start-2-505400 | minikube5\jenkins | v1.33.1 | 13 May 24 23:52 UTC | 13 May 24 23:52 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-505400                           | mount-start-2-505400 | minikube5\jenkins | v1.33.1 | 13 May 24 23:52 UTC | 13 May 24 23:53 UTC |
	| delete  | -p mount-start-1-433500                           | mount-start-1-433500 | minikube5\jenkins | v1.33.1 | 13 May 24 23:53 UTC | 13 May 24 23:53 UTC |
	| start   | -p multinode-101100                               | multinode-101100     | minikube5\jenkins | v1.33.1 | 13 May 24 23:53 UTC | 13 May 24 23:59 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-101100 -- apply -f                   | multinode-101100     | minikube5\jenkins | v1.33.1 | 13 May 24 23:59 UTC | 13 May 24 23:59 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-101100 -- rollout                    | multinode-101100     | minikube5\jenkins | v1.33.1 | 13 May 24 23:59 UTC | 13 May 24 23:59 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-101100 -- get pods -o                | multinode-101100     | minikube5\jenkins | v1.33.1 | 13 May 24 23:59 UTC | 13 May 24 23:59 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-101100 -- get pods -o                | multinode-101100     | minikube5\jenkins | v1.33.1 | 13 May 24 23:59 UTC | 13 May 24 23:59 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-101100 -- exec                       | multinode-101100     | minikube5\jenkins | v1.33.1 | 13 May 24 23:59 UTC | 13 May 24 23:59 UTC |
	|         | busybox-fc5497c4f-q7442 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-101100 -- exec                       | multinode-101100     | minikube5\jenkins | v1.33.1 | 13 May 24 23:59 UTC | 13 May 24 23:59 UTC |
	|         | busybox-fc5497c4f-xqj6w --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-101100 -- exec                       | multinode-101100     | minikube5\jenkins | v1.33.1 | 13 May 24 23:59 UTC | 13 May 24 23:59 UTC |
	|         | busybox-fc5497c4f-q7442 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-101100 -- exec                       | multinode-101100     | minikube5\jenkins | v1.33.1 | 13 May 24 23:59 UTC | 13 May 24 23:59 UTC |
	|         | busybox-fc5497c4f-xqj6w --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-101100 -- exec                       | multinode-101100     | minikube5\jenkins | v1.33.1 | 13 May 24 23:59 UTC | 13 May 24 23:59 UTC |
	|         | busybox-fc5497c4f-q7442 -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-101100 -- exec                       | multinode-101100     | minikube5\jenkins | v1.33.1 | 13 May 24 23:59 UTC | 13 May 24 23:59 UTC |
	|         | busybox-fc5497c4f-xqj6w -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-101100 -- get pods -o                | multinode-101100     | minikube5\jenkins | v1.33.1 | 13 May 24 23:59 UTC | 13 May 24 23:59 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-101100 -- exec                       | multinode-101100     | minikube5\jenkins | v1.33.1 | 13 May 24 23:59 UTC | 13 May 24 23:59 UTC |
	|         | busybox-fc5497c4f-q7442                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-101100 -- exec                       | multinode-101100     | minikube5\jenkins | v1.33.1 | 13 May 24 23:59 UTC |                     |
	|         | busybox-fc5497c4f-q7442 -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.23.96.1                          |                      |                   |         |                     |                     |
	| kubectl | -p multinode-101100 -- exec                       | multinode-101100     | minikube5\jenkins | v1.33.1 | 14 May 24 00:00 UTC | 14 May 24 00:00 UTC |
	|         | busybox-fc5497c4f-xqj6w                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-101100 -- exec                       | multinode-101100     | minikube5\jenkins | v1.33.1 | 14 May 24 00:00 UTC |                     |
	|         | busybox-fc5497c4f-xqj6w -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.23.96.1                          |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/13 23:53:23
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0513 23:53:23.940372    4024 out.go:291] Setting OutFile to fd 748 ...
	I0513 23:53:23.943517    4024 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 23:53:23.943557    4024 out.go:304] Setting ErrFile to fd 772...
	I0513 23:53:23.943557    4024 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 23:53:23.961695    4024 out.go:298] Setting JSON to false
	I0513 23:53:23.964260    4024 start.go:129] hostinfo: {"hostname":"minikube5","uptime":5967,"bootTime":1715638436,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4355 Build 19045.4355","kernelVersion":"10.0.19045.4355 Build 19045.4355","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0513 23:53:23.964329    4024 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 23:53:23.969344    4024 out.go:177] * [multinode-101100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	I0513 23:53:23.973330    4024 notify.go:220] Checking for updates...
	I0513 23:53:23.973330    4024 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 23:53:23.974392    4024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 23:53:23.978424    4024 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0513 23:53:23.980879    4024 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 23:53:23.983208    4024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 23:53:23.986700    4024 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:53:23.987079    4024 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 23:53:28.515961    4024 out.go:177] * Using the hyperv driver based on user configuration
	I0513 23:53:28.519806    4024 start.go:297] selected driver: hyperv
	I0513 23:53:28.519806    4024 start.go:901] validating driver "hyperv" against <nil>
	I0513 23:53:28.519806    4024 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 23:53:28.558963    4024 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 23:53:28.560051    4024 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 23:53:28.560119    4024 cni.go:84] Creating CNI manager for ""
	I0513 23:53:28.560119    4024 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0513 23:53:28.560187    4024 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0513 23:53:28.560284    4024 start.go:340] cluster config:
	{Name:multinode-101100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-101100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 23:53:28.560558    4024 iso.go:125] acquiring lock: {Name:mkcecbdb7e30e5a0901160a859f9d5b65d250c44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 23:53:28.564115    4024 out.go:177] * Starting "multinode-101100" primary control-plane node in "multinode-101100" cluster
	I0513 23:53:28.567994    4024 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 23:53:28.568167    4024 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0513 23:53:28.568202    4024 cache.go:56] Caching tarball of preloaded images
	I0513 23:53:28.568336    4024 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0513 23:53:28.568336    4024 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 23:53:28.568336    4024 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\config.json ...
	I0513 23:53:28.569040    4024 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\config.json: {Name:mk2b7c27c43bde1de6b9bea7c5c106fd1df97df0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:53:28.570088    4024 start.go:360] acquireMachinesLock for multinode-101100: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 23:53:28.570212    4024 start.go:364] duration metric: took 84.6µs to acquireMachinesLock for "multinode-101100"
	I0513 23:53:28.570277    4024 start.go:93] Provisioning new machine with config: &{Name:multinode-101100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-101100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 23:53:28.570277    4024 start.go:125] createHost starting for "" (driver="hyperv")
	I0513 23:53:28.573591    4024 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 23:53:28.573591    4024 start.go:159] libmachine.API.Create for "multinode-101100" (driver="hyperv")
	I0513 23:53:28.573591    4024 client.go:168] LocalClient.Create starting
	I0513 23:53:28.573591    4024 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0513 23:53:28.574749    4024 main.go:141] libmachine: Decoding PEM data...
	I0513 23:53:28.574839    4024 main.go:141] libmachine: Parsing certificate...
	I0513 23:53:28.575029    4024 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0513 23:53:28.575208    4024 main.go:141] libmachine: Decoding PEM data...
	I0513 23:53:28.575249    4024 main.go:141] libmachine: Parsing certificate...
	I0513 23:53:28.575331    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0513 23:53:30.347447    4024 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0513 23:53:30.347447    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:53:30.347447    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0513 23:53:31.799016    4024 main.go:141] libmachine: [stdout =====>] : False
	
	I0513 23:53:31.808777    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:53:31.808777    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0513 23:53:33.063819    4024 main.go:141] libmachine: [stdout =====>] : True
	
	I0513 23:53:33.063819    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:53:33.071080    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0513 23:53:36.155745    4024 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0513 23:53:36.155745    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:53:36.167810    4024 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-amd64.iso...
	I0513 23:53:36.482214    4024 main.go:141] libmachine: Creating SSH key...
	I0513 23:53:36.918004    4024 main.go:141] libmachine: Creating VM...
	I0513 23:53:36.918004    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0513 23:53:39.456305    4024 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0513 23:53:39.457929    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:53:39.457997    4024 main.go:141] libmachine: Using switch "Default Switch"
	I0513 23:53:39.458089    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0513 23:53:40.981558    4024 main.go:141] libmachine: [stdout =====>] : True
	
	I0513 23:53:40.981558    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:53:40.981558    4024 main.go:141] libmachine: Creating VHD
	I0513 23:53:40.981558    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0513 23:53:44.348902    4024 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 56F616B3-A660-47FC-A9F6-D256627DCA18
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0513 23:53:44.358835    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:53:44.358835    4024 main.go:141] libmachine: Writing magic tar header
	I0513 23:53:44.358912    4024 main.go:141] libmachine: Writing SSH key tar header
	I0513 23:53:44.367606    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0513 23:53:47.252587    4024 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:53:47.252587    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:53:47.252587    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\disk.vhd' -SizeBytes 20000MB
	I0513 23:53:49.510850    4024 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:53:49.510850    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:53:49.520298    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-101100 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0513 23:53:52.683985    4024 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-101100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0513 23:53:52.683985    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:53:52.693263    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-101100 -DynamicMemoryEnabled $false
	I0513 23:53:54.655092    4024 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:53:54.655092    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:53:54.655092    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-101100 -Count 2
	I0513 23:53:56.524536    4024 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:53:56.524536    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:53:56.533323    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-101100 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\boot2docker.iso'
	I0513 23:53:58.750981    4024 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:53:58.750981    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:53:58.750981    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-101100 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\disk.vhd'
	I0513 23:54:01.012057    4024 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:54:01.012057    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:01.012057    4024 main.go:141] libmachine: Starting VM...
	I0513 23:54:01.022050    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-101100
	I0513 23:54:03.731858    4024 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:54:03.731858    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:03.731858    4024 main.go:141] libmachine: Waiting for host to start...
	I0513 23:54:03.742079    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:54:05.687477    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:54:05.687519    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:05.687716    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0513 23:54:07.897737    4024 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:54:07.899677    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:08.912339    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:54:10.801024    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:54:10.816527    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:10.816527    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0513 23:54:12.979134    4024 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:54:12.979204    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:13.979885    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:54:15.887654    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:54:15.888049    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:15.888135    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0513 23:54:18.060218    4024 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:54:18.060218    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:19.070525    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:54:20.992554    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:54:21.002383    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:21.002470    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0513 23:54:23.199076    4024 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:54:23.203572    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:24.218272    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:54:26.169785    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:54:26.179509    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:26.179509    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0513 23:54:28.440001    4024 main.go:141] libmachine: [stdout =====>] : 172.23.106.39
	
	I0513 23:54:28.440001    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:28.449392    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:54:30.257114    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:54:30.257114    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:30.257114    4024 machine.go:94] provisionDockerMachine start ...
	I0513 23:54:30.257114    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:54:32.086858    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:54:32.086858    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:32.095762    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0513 23:54:34.220763    4024 main.go:141] libmachine: [stdout =====>] : 172.23.106.39
	
	I0513 23:54:34.226338    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:34.229879    4024 main.go:141] libmachine: Using SSH client type: native
	I0513 23:54:34.238142    4024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.106.39 22 <nil> <nil>}
	I0513 23:54:34.238142    4024 main.go:141] libmachine: About to run SSH command:
	hostname
	I0513 23:54:34.370085    4024 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0513 23:54:34.370085    4024 buildroot.go:166] provisioning hostname "multinode-101100"
	I0513 23:54:34.370614    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:54:36.195546    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:54:36.195546    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:36.195546    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0513 23:54:38.394092    4024 main.go:141] libmachine: [stdout =====>] : 172.23.106.39
	
	I0513 23:54:38.394269    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:38.397646    4024 main.go:141] libmachine: Using SSH client type: native
	I0513 23:54:38.397690    4024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.106.39 22 <nil> <nil>}
	I0513 23:54:38.397690    4024 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-101100 && echo "multinode-101100" | sudo tee /etc/hostname
	I0513 23:54:38.553151    4024 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-101100
	
	I0513 23:54:38.553259    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:54:40.397105    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:54:40.397105    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:40.406350    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0513 23:54:42.589889    4024 main.go:141] libmachine: [stdout =====>] : 172.23.106.39
	
	I0513 23:54:42.589889    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:42.594103    4024 main.go:141] libmachine: Using SSH client type: native
	I0513 23:54:42.594629    4024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.106.39 22 <nil> <nil>}
	I0513 23:54:42.594629    4024 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-101100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-101100/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-101100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0513 23:54:42.739231    4024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0513 23:54:42.739357    4024 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0513 23:54:42.739357    4024 buildroot.go:174] setting up certificates
	I0513 23:54:42.739357    4024 provision.go:84] configureAuth start
	I0513 23:54:42.739357    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:54:44.564551    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:54:44.575486    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:44.575486    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0513 23:54:46.751155    4024 main.go:141] libmachine: [stdout =====>] : 172.23.106.39
	
	I0513 23:54:46.751155    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:46.760887    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:54:48.609485    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:54:48.609485    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:48.617944    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0513 23:54:50.789162    4024 main.go:141] libmachine: [stdout =====>] : 172.23.106.39
	
	I0513 23:54:50.789162    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:50.789162    4024 provision.go:143] copyHostCerts
	I0513 23:54:50.791718    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0513 23:54:50.791856    4024 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0513 23:54:50.791856    4024 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0513 23:54:50.791856    4024 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0513 23:54:50.793039    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0513 23:54:50.793148    4024 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0513 23:54:50.793148    4024 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0513 23:54:50.793148    4024 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0513 23:54:50.794307    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0513 23:54:50.794361    4024 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0513 23:54:50.794361    4024 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0513 23:54:50.794361    4024 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0513 23:54:50.795045    4024 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-101100 san=[127.0.0.1 172.23.106.39 localhost minikube multinode-101100]
	I0513 23:54:50.844209    4024 provision.go:177] copyRemoteCerts
	I0513 23:54:50.854857    4024 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0513 23:54:50.854857    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:54:52.668030    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:54:52.668030    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:52.677319    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0513 23:54:54.861306    4024 main.go:141] libmachine: [stdout =====>] : 172.23.106.39
	
	I0513 23:54:54.861306    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:54.871396    4024 sshutil.go:53] new ssh client: &{IP:172.23.106.39 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\id_rsa Username:docker}
	I0513 23:54:54.974118    4024 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1190276s)
	I0513 23:54:54.974118    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0513 23:54:54.974118    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0513 23:54:55.015941    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0513 23:54:55.016303    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0513 23:54:55.056507    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0513 23:54:55.056795    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0513 23:54:55.100723    4024 provision.go:87] duration metric: took 12.3606659s to configureAuth
	I0513 23:54:55.100723    4024 buildroot.go:189] setting minikube options for container-runtime
	I0513 23:54:55.102276    4024 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:54:55.102276    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:54:56.903449    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:54:56.903449    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:56.912369    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0513 23:54:59.117101    4024 main.go:141] libmachine: [stdout =====>] : 172.23.106.39
	
	I0513 23:54:59.117101    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:54:59.129835    4024 main.go:141] libmachine: Using SSH client type: native
	I0513 23:54:59.130358    4024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.106.39 22 <nil> <nil>}
	I0513 23:54:59.130358    4024 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0513 23:54:59.262401    4024 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0513 23:54:59.262475    4024 buildroot.go:70] root file system type: tmpfs
	I0513 23:54:59.262475    4024 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0513 23:54:59.262475    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:55:01.095723    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:55:01.095723    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:55:01.106178    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0513 23:55:03.386734    4024 main.go:141] libmachine: [stdout =====>] : 172.23.106.39
	
	I0513 23:55:03.386734    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:55:03.401940    4024 main.go:141] libmachine: Using SSH client type: native
	I0513 23:55:03.401940    4024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.106.39 22 <nil> <nil>}
	I0513 23:55:03.402464    4024 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0513 23:55:03.564454    4024 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0513 23:55:03.564454    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:55:05.434028    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:55:05.434028    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:55:05.443891    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0513 23:55:07.650761    4024 main.go:141] libmachine: [stdout =====>] : 172.23.106.39
	
	I0513 23:55:07.660998    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:55:07.664940    4024 main.go:141] libmachine: Using SSH client type: native
	I0513 23:55:07.665589    4024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.106.39 22 <nil> <nil>}
	I0513 23:55:07.665589    4024 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0513 23:55:09.674627    4024 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0513 23:55:09.674660    4024 machine.go:97] duration metric: took 39.4153139s to provisionDockerMachine
	I0513 23:55:09.674660    4024 client.go:171] duration metric: took 1m41.0953143s to LocalClient.Create
	I0513 23:55:09.674660    4024 start.go:167] duration metric: took 1m41.0953143s to libmachine.API.Create "multinode-101100"
	I0513 23:55:09.674660    4024 start.go:293] postStartSetup for "multinode-101100" (driver="hyperv")
	I0513 23:55:09.674660    4024 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0513 23:55:09.684188    4024 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0513 23:55:09.684188    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:55:11.511597    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:55:11.511597    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:55:11.521281    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0513 23:55:13.705645    4024 main.go:141] libmachine: [stdout =====>] : 172.23.106.39
	
	I0513 23:55:13.705645    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:55:13.705994    4024 sshutil.go:53] new ssh client: &{IP:172.23.106.39 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\id_rsa Username:docker}
	I0513 23:55:13.812812    4024 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.1283914s)
	I0513 23:55:13.822759    4024 ssh_runner.go:195] Run: cat /etc/os-release
	I0513 23:55:13.826009    4024 command_runner.go:130] > NAME=Buildroot
	I0513 23:55:13.826009    4024 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0513 23:55:13.826009    4024 command_runner.go:130] > ID=buildroot
	I0513 23:55:13.826009    4024 command_runner.go:130] > VERSION_ID=2023.02.9
	I0513 23:55:13.826009    4024 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0513 23:55:13.826009    4024 info.go:137] Remote host: Buildroot 2023.02.9
	I0513 23:55:13.826009    4024 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0513 23:55:13.826009    4024 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0513 23:55:13.831332    4024 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> 59842.pem in /etc/ssl/certs
	I0513 23:55:13.831875    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /etc/ssl/certs/59842.pem
	I0513 23:55:13.843706    4024 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0513 23:55:13.860007    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /etc/ssl/certs/59842.pem (1708 bytes)
	I0513 23:55:13.900187    4024 start.go:296] duration metric: took 4.2252885s for postStartSetup
	I0513 23:55:13.903050    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:55:15.726362    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:55:15.726362    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:55:15.735824    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0513 23:55:18.041673    4024 main.go:141] libmachine: [stdout =====>] : 172.23.106.39
	
	I0513 23:55:18.041673    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:55:18.042536    4024 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\config.json ...
	I0513 23:55:18.045329    4024 start.go:128] duration metric: took 1m49.4688252s to createHost
	I0513 23:55:18.045384    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:55:19.895540    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:55:19.895540    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:55:19.906045    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0513 23:55:22.137998    4024 main.go:141] libmachine: [stdout =====>] : 172.23.106.39
	
	I0513 23:55:22.147813    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:55:22.151537    4024 main.go:141] libmachine: Using SSH client type: native
	I0513 23:55:22.151686    4024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.106.39 22 <nil> <nil>}
	I0513 23:55:22.151686    4024 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0513 23:55:22.284273    4024 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715644522.499742083
	
	I0513 23:55:22.284273    4024 fix.go:216] guest clock: 1715644522.499742083
	I0513 23:55:22.284273    4024 fix.go:229] Guest: 2024-05-13 23:55:22.499742083 +0000 UTC Remote: 2024-05-13 23:55:18.0453847 +0000 UTC m=+114.224889501 (delta=4.454357383s)
	I0513 23:55:22.284273    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:55:24.130975    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:55:24.140318    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:55:24.140318    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0513 23:55:26.357795    4024 main.go:141] libmachine: [stdout =====>] : 172.23.106.39
	
	I0513 23:55:26.367463    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:55:26.371553    4024 main.go:141] libmachine: Using SSH client type: native
	I0513 23:55:26.371728    4024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.106.39 22 <nil> <nil>}
	I0513 23:55:26.371728    4024 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715644522
	I0513 23:55:26.515268    4024 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 13 23:55:22 UTC 2024
	
	I0513 23:55:26.515268    4024 fix.go:236] clock set: Mon May 13 23:55:22 UTC 2024
	 (err=<nil>)
	I0513 23:55:26.515268    4024 start.go:83] releasing machines lock for "multinode-101100", held for 1m57.9382878s
	I0513 23:55:26.515875    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:55:28.322343    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:55:28.322343    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:55:28.322343    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0513 23:55:30.541292    4024 main.go:141] libmachine: [stdout =====>] : 172.23.106.39
	
	I0513 23:55:30.541292    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:55:30.544805    4024 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0513 23:55:30.544805    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:55:30.551658    4024 ssh_runner.go:195] Run: cat /version.json
	I0513 23:55:30.551658    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:55:32.549751    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:55:32.549751    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:55:32.549751    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0513 23:55:32.550279    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:55:32.550279    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:55:32.550556    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0513 23:55:34.911946    4024 main.go:141] libmachine: [stdout =====>] : 172.23.106.39
	
	I0513 23:55:34.911946    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:55:34.911946    4024 sshutil.go:53] new ssh client: &{IP:172.23.106.39 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\id_rsa Username:docker}
	I0513 23:55:34.932413    4024 main.go:141] libmachine: [stdout =====>] : 172.23.106.39
	
	I0513 23:55:34.932413    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:55:34.933136    4024 sshutil.go:53] new ssh client: &{IP:172.23.106.39 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\id_rsa Username:docker}
	I0513 23:55:35.013743    4024 command_runner.go:130] > {"iso_version": "v1.33.1", "kicbase_version": "v0.0.43-1714992375-18804", "minikube_version": "v1.33.1", "commit": "d6e0d89dd5607476c1efbac5f05c928d4cd7ef53"}
	I0513 23:55:35.013916    4024 ssh_runner.go:235] Completed: cat /version.json: (4.4620067s)
	I0513 23:55:35.023257    4024 ssh_runner.go:195] Run: systemctl --version
	I0513 23:55:35.136418    4024 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0513 23:55:35.136524    4024 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.591461s)
	I0513 23:55:35.136686    4024 command_runner.go:130] > systemd 252 (252)
	I0513 23:55:35.136753    4024 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0513 23:55:35.150326    4024 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0513 23:55:35.159736    4024 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0513 23:55:35.160535    4024 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0513 23:55:35.169651    4024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0513 23:55:35.196289    4024 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0513 23:55:35.196455    4024 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0513 23:55:35.196523    4024 start.go:494] detecting cgroup driver to use...
	I0513 23:55:35.196861    4024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 23:55:35.227663    4024 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0513 23:55:35.237296    4024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0513 23:55:35.263160    4024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0513 23:55:35.281262    4024 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0513 23:55:35.294639    4024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0513 23:55:35.319562    4024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 23:55:35.345690    4024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0513 23:55:35.375612    4024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 23:55:35.405533    4024 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0513 23:55:35.431019    4024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0513 23:55:35.457536    4024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0513 23:55:35.485314    4024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0513 23:55:35.512735    4024 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0513 23:55:35.530352    4024 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0513 23:55:35.541916    4024 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0513 23:55:35.567382    4024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:55:35.753338    4024 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0513 23:55:35.780560    4024 start.go:494] detecting cgroup driver to use...
	I0513 23:55:35.792731    4024 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0513 23:55:35.820501    4024 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0513 23:55:35.820542    4024 command_runner.go:130] > [Unit]
	I0513 23:55:35.820588    4024 command_runner.go:130] > Description=Docker Application Container Engine
	I0513 23:55:35.820588    4024 command_runner.go:130] > Documentation=https://docs.docker.com
	I0513 23:55:35.820625    4024 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0513 23:55:35.820625    4024 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0513 23:55:35.820687    4024 command_runner.go:130] > StartLimitBurst=3
	I0513 23:55:35.820687    4024 command_runner.go:130] > StartLimitIntervalSec=60
	I0513 23:55:35.820687    4024 command_runner.go:130] > [Service]
	I0513 23:55:35.820687    4024 command_runner.go:130] > Type=notify
	I0513 23:55:35.820740    4024 command_runner.go:130] > Restart=on-failure
	I0513 23:55:35.820740    4024 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0513 23:55:35.820780    4024 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0513 23:55:35.820780    4024 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0513 23:55:35.820780    4024 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0513 23:55:35.820780    4024 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0513 23:55:35.820780    4024 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0513 23:55:35.820780    4024 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0513 23:55:35.820780    4024 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0513 23:55:35.820780    4024 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0513 23:55:35.820780    4024 command_runner.go:130] > ExecStart=
	I0513 23:55:35.820780    4024 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0513 23:55:35.820780    4024 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0513 23:55:35.820780    4024 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0513 23:55:35.820780    4024 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0513 23:55:35.820780    4024 command_runner.go:130] > LimitNOFILE=infinity
	I0513 23:55:35.820780    4024 command_runner.go:130] > LimitNPROC=infinity
	I0513 23:55:35.820780    4024 command_runner.go:130] > LimitCORE=infinity
	I0513 23:55:35.820780    4024 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0513 23:55:35.820780    4024 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0513 23:55:35.820780    4024 command_runner.go:130] > TasksMax=infinity
	I0513 23:55:35.820780    4024 command_runner.go:130] > TimeoutStartSec=0
	I0513 23:55:35.820780    4024 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0513 23:55:35.820780    4024 command_runner.go:130] > Delegate=yes
	I0513 23:55:35.820780    4024 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0513 23:55:35.820780    4024 command_runner.go:130] > KillMode=process
	I0513 23:55:35.820780    4024 command_runner.go:130] > [Install]
	I0513 23:55:35.820780    4024 command_runner.go:130] > WantedBy=multi-user.target
	I0513 23:55:35.833256    4024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 23:55:35.861304    4024 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0513 23:55:35.896377    4024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 23:55:35.928965    4024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 23:55:35.956229    4024 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0513 23:55:36.008772    4024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 23:55:36.030962    4024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 23:55:36.060903    4024 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0513 23:55:36.071132    4024 ssh_runner.go:195] Run: which cri-dockerd
	I0513 23:55:36.075503    4024 command_runner.go:130] > /usr/bin/cri-dockerd
	I0513 23:55:36.084021    4024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0513 23:55:36.098771    4024 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0513 23:55:36.133728    4024 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0513 23:55:36.304757    4024 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0513 23:55:36.470061    4024 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0513 23:55:36.470061    4024 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0513 23:55:36.508253    4024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:55:36.680575    4024 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0513 23:55:39.157904    4024 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4771903s)
	I0513 23:55:39.170956    4024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0513 23:55:39.203886    4024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 23:55:39.237886    4024 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0513 23:55:39.415005    4024 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0513 23:55:39.595302    4024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:55:39.768634    4024 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0513 23:55:39.806192    4024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 23:55:39.841318    4024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:55:40.030733    4024 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0513 23:55:40.133317    4024 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0513 23:55:40.142267    4024 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0513 23:55:40.155821    4024 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0513 23:55:40.156633    4024 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0513 23:55:40.156633    4024 command_runner.go:130] > Device: 0,22	Inode: 894         Links: 1
	I0513 23:55:40.156633    4024 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0513 23:55:40.156633    4024 command_runner.go:130] > Access: 2024-05-13 23:55:40.263988187 +0000
	I0513 23:55:40.156633    4024 command_runner.go:130] > Modify: 2024-05-13 23:55:40.263988187 +0000
	I0513 23:55:40.156633    4024 command_runner.go:130] > Change: 2024-05-13 23:55:40.266988331 +0000
	I0513 23:55:40.156633    4024 command_runner.go:130] >  Birth: -
	I0513 23:55:40.156835    4024 start.go:562] Will wait 60s for crictl version
	I0513 23:55:40.168473    4024 ssh_runner.go:195] Run: which crictl
	I0513 23:55:40.175059    4024 command_runner.go:130] > /usr/bin/crictl
	I0513 23:55:40.187011    4024 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0513 23:55:40.230147    4024 command_runner.go:130] > Version:  0.1.0
	I0513 23:55:40.231160    4024 command_runner.go:130] > RuntimeName:  docker
	I0513 23:55:40.231251    4024 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0513 23:55:40.231382    4024 command_runner.go:130] > RuntimeApiVersion:  v1
	I0513 23:55:40.233701    4024 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0513 23:55:40.241093    4024 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 23:55:40.267486    4024 command_runner.go:130] > 26.0.2
	I0513 23:55:40.277176    4024 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 23:55:40.303516    4024 command_runner.go:130] > 26.0.2
	I0513 23:55:40.307614    4024 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0513 23:55:40.307614    4024 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0513 23:55:40.313134    4024 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0513 23:55:40.313134    4024 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0513 23:55:40.313134    4024 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0513 23:55:40.313134    4024 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:95:ed Flags:up|broadcast|multicast|running}
	I0513 23:55:40.315244    4024 ip.go:210] interface addr: fe80::3ceb:68d:afab:af25/64
	I0513 23:55:40.315244    4024 ip.go:210] interface addr: 172.23.96.1/20
	I0513 23:55:40.322774    4024 ssh_runner.go:195] Run: grep 172.23.96.1	host.minikube.internal$ /etc/hosts
	I0513 23:55:40.328479    4024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
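	The `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp` command above is minikube's idempotent /etc/hosts update: strip any stale `host.minikube.internal` line, append the current mapping, then copy the result into place. A sketch against a scratch file (the stale sample entry is illustrative):

```shell
hosts="$(mktemp)"
printf '127.0.0.1\tlocalhost\n172.23.0.9\thost.minikube.internal\n' > "$hosts"
# Drop any stale entry, then append the fresh one; the result has exactly
# one host.minikube.internal line no matter how often this runs.
{ grep -v $'\thost.minikube.internal$' "$hosts"; printf '172.23.96.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep -c 'host.minikube.internal' "$hosts"   # → 1
```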
	I0513 23:55:40.351851    4024 kubeadm.go:877] updating cluster {Name:multinode-101100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-101100 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.106.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0513 23:55:40.351928    4024 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 23:55:40.359845    4024 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0513 23:55:40.377804    4024 docker.go:685] Got preloaded images: 
	I0513 23:55:40.377935    4024 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0513 23:55:40.387866    4024 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0513 23:55:40.403122    4024 command_runner.go:139] > {"Repositories":{}}
	I0513 23:55:40.411223    4024 ssh_runner.go:195] Run: which lz4
	I0513 23:55:40.416137    4024 command_runner.go:130] > /usr/bin/lz4
	I0513 23:55:40.416137    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0513 23:55:40.423791    4024 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0513 23:55:40.430339    4024 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0513 23:55:40.430339    4024 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0513 23:55:40.430523    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0513 23:55:41.760417    4024 docker.go:649] duration metric: took 1.3434957s to copy over tarball
	I0513 23:55:41.769054    4024 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0513 23:55:51.082868    4024 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (9.3132914s)
	I0513 23:55:51.082868    4024 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0513 23:55:51.143285    4024 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0513 23:55:51.158601    4024 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.0":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.0":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.0":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e
07f7ac08e80ba0b"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.0":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
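	The repositories.json dump above is Docker's image-store index: one object per repository, mapping each tag and digest to an image ID, which is why minikube rewrites it after unpacking the preload tarball. A rough way to pull the repository names out of such a file (tiny inline sample; a real parse would use a JSON tool):

```shell
cat > repositories.json <<'EOF'
{"Repositories":{"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
EOF
# Repository keys are the lowercase image names directly under "Repositories".
grep -o '"[a-z0-9./-]*":{' repositories.json | tr -d '":{'   # → registry.k8s.io/pause
```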
	I0513 23:55:51.159327    4024 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0513 23:55:51.204172    4024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:55:51.385324    4024 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0513 23:55:54.738414    4024 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.352903s)
	I0513 23:55:54.745926    4024 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0513 23:55:54.766208    4024 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0513 23:55:54.767183    4024 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0513 23:55:54.767183    4024 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0513 23:55:54.767183    4024 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0513 23:55:54.767183    4024 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0513 23:55:54.767226    4024 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0513 23:55:54.767226    4024 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0513 23:55:54.767226    4024 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 23:55:54.767268    4024 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0513 23:55:54.767268    4024 cache_images.go:84] Images are preloaded, skipping loading
	I0513 23:55:54.767364    4024 kubeadm.go:928] updating node { 172.23.106.39 8443 v1.30.0 docker true true} ...
	I0513 23:55:54.767544    4024 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-101100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.106.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-101100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0513 23:55:54.773591    4024 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0513 23:55:54.808237    4024 command_runner.go:130] > cgroupfs
	I0513 23:55:54.810307    4024 cni.go:84] Creating CNI manager for ""
	I0513 23:55:54.810388    4024 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0513 23:55:54.810433    4024 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0513 23:55:54.810530    4024 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.106.39 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-101100 NodeName:multinode-101100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.106.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.106.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0513 23:55:54.810829    4024 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.106.39
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-101100"
	  kubeletExtraArgs:
	    node-ip: 172.23.106.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.106.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0513 23:55:54.821810    4024 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0513 23:55:54.838890    4024 command_runner.go:130] > kubeadm
	I0513 23:55:54.838890    4024 command_runner.go:130] > kubectl
	I0513 23:55:54.838890    4024 command_runner.go:130] > kubelet
	I0513 23:55:54.838890    4024 binaries.go:44] Found k8s binaries, skipping transfer
	I0513 23:55:54.849086    4024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0513 23:55:54.866688    4024 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0513 23:55:54.894076    4024 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0513 23:55:54.920841    4024 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0513 23:55:54.958124    4024 ssh_runner.go:195] Run: grep 172.23.106.39	control-plane.minikube.internal$ /etc/hosts
	I0513 23:55:54.964042    4024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.106.39	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0513 23:55:54.994889    4024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:55:55.156191    4024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 23:55:55.179271    4024 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100 for IP: 172.23.106.39
	I0513 23:55:55.179271    4024 certs.go:194] generating shared ca certs ...
	I0513 23:55:55.179271    4024 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:55:55.180366    4024 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0513 23:55:55.180759    4024 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0513 23:55:55.180868    4024 certs.go:256] generating profile certs ...
	I0513 23:55:55.180868    4024 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\client.key
	I0513 23:55:55.181588    4024 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\client.crt with IP's: []
	I0513 23:55:55.467939    4024 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\client.crt ...
	I0513 23:55:55.467939    4024 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\client.crt: {Name:mk7c4116a719940a17895dc0d8de9f1203079f4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:55:55.469331    4024 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\client.key ...
	I0513 23:55:55.469901    4024 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\client.key: {Name:mka098cd028a1772b04304ec329f1e4746f36870 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:55:55.470581    4024 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.key.13a59ca6
	I0513 23:55:55.471107    4024 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.crt.13a59ca6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.106.39]
	I0513 23:55:55.811310    4024 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.crt.13a59ca6 ...
	I0513 23:55:55.811310    4024 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.crt.13a59ca6: {Name:mkc5187e44a771cfbf3df0af485e3d1dcf0c58f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:55:55.812393    4024 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.key.13a59ca6 ...
	I0513 23:55:55.812393    4024 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.key.13a59ca6: {Name:mkf02d1f58fae08c96405c06402aca26ff0d544a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:55:55.814388    4024 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.crt.13a59ca6 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.crt
	I0513 23:55:55.824091    4024 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.key.13a59ca6 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.key
	I0513 23:55:55.825098    4024 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\proxy-client.key
	I0513 23:55:55.825098    4024 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\proxy-client.crt with IP's: []
	I0513 23:55:56.089144    4024 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\proxy-client.crt ...
	I0513 23:55:56.089144    4024 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\proxy-client.crt: {Name:mk36fc22c229347c5c5859b37adb080bf4b9bbbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:55:56.090829    4024 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\proxy-client.key ...
	I0513 23:55:56.090829    4024 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\proxy-client.key: {Name:mk250797df26edfb4703786065fe0a604094edb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:55:56.092101    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0513 23:55:56.092560    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0513 23:55:56.092740    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0513 23:55:56.092822    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0513 23:55:56.092822    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0513 23:55:56.093049    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0513 23:55:56.093150    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0513 23:55:56.103405    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0513 23:55:56.111481    4024 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem (1338 bytes)
	W0513 23:55:56.111481    4024 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984_empty.pem, impossibly tiny 0 bytes
	I0513 23:55:56.112005    4024 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0513 23:55:56.112181    4024 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0513 23:55:56.112348    4024 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0513 23:55:56.112348    4024 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0513 23:55:56.112348    4024 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem (1708 bytes)
	I0513 23:55:56.112972    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /usr/share/ca-certificates/59842.pem
	I0513 23:55:56.112999    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:55:56.112999    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem -> /usr/share/ca-certificates/5984.pem
	I0513 23:55:56.114269    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0513 23:55:56.157254    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0513 23:55:56.191317    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0513 23:55:56.230314    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0513 23:55:56.270916    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0513 23:55:56.312335    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0513 23:55:56.356293    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0513 23:55:56.401891    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0513 23:55:56.442477    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /usr/share/ca-certificates/59842.pem (1708 bytes)
	I0513 23:55:56.482849    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0513 23:55:56.525961    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem --> /usr/share/ca-certificates/5984.pem (1338 bytes)
	I0513 23:55:56.567455    4024 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0513 23:55:56.606038    4024 ssh_runner.go:195] Run: openssl version
	I0513 23:55:56.613978    4024 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0513 23:55:56.622964    4024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0513 23:55:56.652557    4024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:55:56.659744    4024 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:55:56.659777    4024 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:55:56.668248    4024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:55:56.676962    4024 command_runner.go:130] > b5213941
	I0513 23:55:56.685124    4024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0513 23:55:56.710603    4024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5984.pem && ln -fs /usr/share/ca-certificates/5984.pem /etc/ssl/certs/5984.pem"
	I0513 23:55:56.736022    4024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5984.pem
	I0513 23:55:56.743698    4024 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0513 23:55:56.743698    4024 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0513 23:55:56.752321    4024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5984.pem
	I0513 23:55:56.762804    4024 command_runner.go:130] > 51391683
	I0513 23:55:56.770793    4024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5984.pem /etc/ssl/certs/51391683.0"
	I0513 23:55:56.796698    4024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59842.pem && ln -fs /usr/share/ca-certificates/59842.pem /etc/ssl/certs/59842.pem"
	I0513 23:55:56.830208    4024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59842.pem
	I0513 23:55:56.836584    4024 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0513 23:55:56.836584    4024 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0513 23:55:56.845107    4024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59842.pem
	I0513 23:55:56.853128    4024 command_runner.go:130] > 3ec20f2e
	I0513 23:55:56.863413    4024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59842.pem /etc/ssl/certs/3ec20f2e.0"
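The three certificate blocks above each follow the same pattern: hash the PEM with `openssl x509 -hash`, then symlink it into `/etc/ssl/certs` under `<hash>.0`. A minimal shell sketch of that step (`link_cert` is a hypothetical helper name; only the `openssl`/`ln` usage mirrors the log):

```shell
# Sketch of minikube's per-certificate setup seen in the log: compute the
# OpenSSL subject hash, then symlink <hash>.0 to the PEM file.
link_cert() {
  pem="$1"        # e.g. /usr/share/ca-certificates/minikubeCA.pem
  certs_dir="$2"  # e.g. /etc/ssl/certs
  hash="$(openssl x509 -hash -noout -in "$pem")"
  # mirrors `test -L ... || ln -fs ...` from the log: skip if the link exists
  [ -L "$certs_dir/$hash.0" ] || ln -fs "$pem" "$certs_dir/$hash.0"
  echo "$hash"
}
```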
	I0513 23:55:56.889732    4024 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0513 23:55:56.896261    4024 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0513 23:55:56.896308    4024 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0513 23:55:56.896308    4024 kubeadm.go:391] StartCluster: {Name:multinode-101100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-101100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.106.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 23:55:56.906106    4024 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0513 23:55:56.941663    4024 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0513 23:55:56.959069    4024 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0513 23:55:56.959758    4024 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0513 23:55:56.959758    4024 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0513 23:55:56.968553    4024 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0513 23:55:57.001227    4024 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0513 23:55:57.017001    4024 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0513 23:55:57.017164    4024 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0513 23:55:57.017164    4024 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0513 23:55:57.017164    4024 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0513 23:55:57.017692    4024 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0513 23:55:57.017692    4024 kubeadm.go:156] found existing configuration files:
	
	I0513 23:55:57.027235    4024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0513 23:55:57.043699    4024 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0513 23:55:57.043699    4024 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0513 23:55:57.052091    4024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0513 23:55:57.075296    4024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0513 23:55:57.091030    4024 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0513 23:55:57.091030    4024 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0513 23:55:57.100982    4024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0513 23:55:57.124387    4024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0513 23:55:57.139392    4024 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0513 23:55:57.139392    4024 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0513 23:55:57.149261    4024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0513 23:55:57.174931    4024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0513 23:55:57.189087    4024 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0513 23:55:57.189087    4024 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0513 23:55:57.199670    4024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
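The four grep/rm pairs above implement one check: a kubeconfig that does not reference the expected control-plane endpoint is treated as stale and removed. A condensed sketch of that loop (`clean_stale` is a hypothetical helper; the endpoint string and file names are taken from the log):

```shell
# Sketch of the stale-kubeconfig cleanup loop: keep a file only if it
# already points at the expected endpoint, otherwise rm -f it.
clean_stale() {
  dir="$1"
  endpoint="https://control-plane.minikube.internal:8443"
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    grep -q "$endpoint" "$dir/$f" 2>/dev/null || rm -f "$dir/$f"
  done
}
```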
	I0513 23:55:57.215299    4024 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0513 23:55:57.577248    4024 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0513 23:55:57.577339    4024 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0513 23:56:09.816001    4024 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0513 23:56:09.816001    4024 command_runner.go:130] > [init] Using Kubernetes version: v1.30.0
	I0513 23:56:09.817195    4024 kubeadm.go:309] [preflight] Running pre-flight checks
	I0513 23:56:09.817259    4024 command_runner.go:130] > [preflight] Running pre-flight checks
	I0513 23:56:09.817402    4024 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0513 23:56:09.817402    4024 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0513 23:56:09.818100    4024 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0513 23:56:09.818100    4024 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0513 23:56:09.818452    4024 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0513 23:56:09.818452    4024 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0513 23:56:09.818714    4024 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0513 23:56:09.818714    4024 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0513 23:56:09.822652    4024 out.go:204]   - Generating certificates and keys ...
	I0513 23:56:09.822652    4024 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0513 23:56:09.822652    4024 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0513 23:56:09.822652    4024 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0513 23:56:09.823180    4024 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0513 23:56:09.823385    4024 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0513 23:56:09.823385    4024 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0513 23:56:09.823618    4024 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0513 23:56:09.823618    4024 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0513 23:56:09.823765    4024 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0513 23:56:09.823804    4024 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0513 23:56:09.823991    4024 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0513 23:56:09.824040    4024 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0513 23:56:09.824070    4024 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0513 23:56:09.824070    4024 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0513 23:56:09.824070    4024 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-101100] and IPs [172.23.106.39 127.0.0.1 ::1]
	I0513 23:56:09.824070    4024 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-101100] and IPs [172.23.106.39 127.0.0.1 ::1]
	I0513 23:56:09.824070    4024 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0513 23:56:09.824070    4024 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0513 23:56:09.824851    4024 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-101100] and IPs [172.23.106.39 127.0.0.1 ::1]
	I0513 23:56:09.824876    4024 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-101100] and IPs [172.23.106.39 127.0.0.1 ::1]
	I0513 23:56:09.824982    4024 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0513 23:56:09.824982    4024 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0513 23:56:09.824982    4024 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0513 23:56:09.824982    4024 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0513 23:56:09.824982    4024 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0513 23:56:09.824982    4024 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0513 23:56:09.824982    4024 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0513 23:56:09.824982    4024 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0513 23:56:09.825560    4024 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0513 23:56:09.825596    4024 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0513 23:56:09.825695    4024 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0513 23:56:09.825763    4024 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0513 23:56:09.825894    4024 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0513 23:56:09.825894    4024 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0513 23:56:09.826031    4024 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0513 23:56:09.826116    4024 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0513 23:56:09.826274    4024 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0513 23:56:09.826274    4024 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0513 23:56:09.826550    4024 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0513 23:56:09.826550    4024 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0513 23:56:09.826618    4024 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0513 23:56:09.826618    4024 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0513 23:56:09.830064    4024 out.go:204]   - Booting up control plane ...
	I0513 23:56:09.830230    4024 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0513 23:56:09.830262    4024 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0513 23:56:09.830262    4024 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0513 23:56:09.830435    4024 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0513 23:56:09.830666    4024 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0513 23:56:09.830666    4024 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0513 23:56:09.831469    4024 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0513 23:56:09.831517    4024 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0513 23:56:09.831710    4024 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0513 23:56:09.831710    4024 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0513 23:56:09.831825    4024 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0513 23:56:09.831892    4024 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0513 23:56:09.832176    4024 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0513 23:56:09.832176    4024 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0513 23:56:09.832355    4024 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0513 23:56:09.832355    4024 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0513 23:56:09.832355    4024 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002344053s
	I0513 23:56:09.832355    4024 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002344053s
	I0513 23:56:09.832355    4024 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0513 23:56:09.832355    4024 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0513 23:56:09.832355    4024 kubeadm.go:309] [api-check] The API server is healthy after 6.002716251s
	I0513 23:56:09.832355    4024 command_runner.go:130] > [api-check] The API server is healthy after 6.002716251s
	I0513 23:56:09.832993    4024 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0513 23:56:09.832993    4024 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0513 23:56:09.833030    4024 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0513 23:56:09.833030    4024 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0513 23:56:09.833030    4024 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0513 23:56:09.833030    4024 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0513 23:56:09.833739    4024 command_runner.go:130] > [mark-control-plane] Marking the node multinode-101100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0513 23:56:09.833739    4024 kubeadm.go:309] [mark-control-plane] Marking the node multinode-101100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0513 23:56:09.833739    4024 command_runner.go:130] > [bootstrap-token] Using token: l14ebk.bhut2keqx38v0n91
	I0513 23:56:09.833739    4024 kubeadm.go:309] [bootstrap-token] Using token: l14ebk.bhut2keqx38v0n91
	I0513 23:56:09.838124    4024 out.go:204]   - Configuring RBAC rules ...
	I0513 23:56:09.838124    4024 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0513 23:56:09.838124    4024 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0513 23:56:09.838124    4024 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0513 23:56:09.838124    4024 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0513 23:56:09.838124    4024 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0513 23:56:09.838124    4024 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0513 23:56:09.839106    4024 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0513 23:56:09.839106    4024 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0513 23:56:09.839106    4024 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0513 23:56:09.839106    4024 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0513 23:56:09.839106    4024 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0513 23:56:09.839106    4024 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0513 23:56:09.839106    4024 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0513 23:56:09.839106    4024 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0513 23:56:09.839106    4024 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0513 23:56:09.839106    4024 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0513 23:56:09.839106    4024 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0513 23:56:09.839106    4024 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0513 23:56:09.839106    4024 kubeadm.go:309] 
	I0513 23:56:09.840140    4024 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0513 23:56:09.840140    4024 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0513 23:56:09.840140    4024 kubeadm.go:309] 
	I0513 23:56:09.840140    4024 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0513 23:56:09.840140    4024 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0513 23:56:09.840140    4024 kubeadm.go:309] 
	I0513 23:56:09.840140    4024 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0513 23:56:09.840140    4024 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0513 23:56:09.840140    4024 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0513 23:56:09.840140    4024 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0513 23:56:09.840140    4024 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0513 23:56:09.840140    4024 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0513 23:56:09.840140    4024 kubeadm.go:309] 
	I0513 23:56:09.840140    4024 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0513 23:56:09.840140    4024 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0513 23:56:09.840140    4024 kubeadm.go:309] 
	I0513 23:56:09.840140    4024 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0513 23:56:09.840140    4024 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0513 23:56:09.840140    4024 kubeadm.go:309] 
	I0513 23:56:09.841109    4024 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0513 23:56:09.841109    4024 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0513 23:56:09.841109    4024 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0513 23:56:09.841109    4024 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0513 23:56:09.841109    4024 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0513 23:56:09.841109    4024 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0513 23:56:09.841109    4024 kubeadm.go:309] 
	I0513 23:56:09.841109    4024 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0513 23:56:09.841109    4024 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0513 23:56:09.841109    4024 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0513 23:56:09.841109    4024 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0513 23:56:09.841109    4024 kubeadm.go:309] 
	I0513 23:56:09.841109    4024 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token l14ebk.bhut2keqx38v0n91 \
	I0513 23:56:09.841109    4024 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token l14ebk.bhut2keqx38v0n91 \
	I0513 23:56:09.841109    4024 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 \
	I0513 23:56:09.841109    4024 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 \
	I0513 23:56:09.841109    4024 kubeadm.go:309] 	--control-plane 
	I0513 23:56:09.841109    4024 command_runner.go:130] > 	--control-plane 
	I0513 23:56:09.841109    4024 kubeadm.go:309] 
	I0513 23:56:09.842111    4024 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0513 23:56:09.842111    4024 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0513 23:56:09.842111    4024 kubeadm.go:309] 
	I0513 23:56:09.842111    4024 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token l14ebk.bhut2keqx38v0n91 \
	I0513 23:56:09.842111    4024 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token l14ebk.bhut2keqx38v0n91 \
	I0513 23:56:09.842111    4024 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 
	I0513 23:56:09.842111    4024 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 
	I0513 23:56:09.842111    4024 cni.go:84] Creating CNI manager for ""
	I0513 23:56:09.842111    4024 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0513 23:56:09.844850    4024 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0513 23:56:09.858078    4024 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0513 23:56:09.865173    4024 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0513 23:56:09.865173    4024 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0513 23:56:09.865173    4024 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0513 23:56:09.865173    4024 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0513 23:56:09.865173    4024 command_runner.go:130] > Access: 2024-05-13 23:54:26.313387000 +0000
	I0513 23:56:09.865173    4024 command_runner.go:130] > Modify: 2024-05-09 03:04:38.000000000 +0000
	I0513 23:56:09.865173    4024 command_runner.go:130] > Change: 2024-05-13 23:54:17.493000000 +0000
	I0513 23:56:09.865173    4024 command_runner.go:130] >  Birth: -
	I0513 23:56:09.866522    4024 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0513 23:56:09.866522    4024 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0513 23:56:09.909297    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0513 23:56:10.476874    4024 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0513 23:56:10.476874    4024 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0513 23:56:10.476874    4024 command_runner.go:130] > serviceaccount/kindnet created
	I0513 23:56:10.476874    4024 command_runner.go:130] > daemonset.apps/kindnet created
	I0513 23:56:10.476874    4024 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0513 23:56:10.487913    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:10.487913    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-101100 minikube.k8s.io/updated_at=2024_05_13T23_56_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761 minikube.k8s.io/name=multinode-101100 minikube.k8s.io/primary=true
	I0513 23:56:10.501367    4024 command_runner.go:130] > -16
	I0513 23:56:10.501495    4024 ops.go:34] apiserver oom_adj: -16
	I0513 23:56:10.675601    4024 command_runner.go:130] > node/multinode-101100 labeled
	I0513 23:56:10.690515    4024 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0513 23:56:10.700809    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:10.803881    4024 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0513 23:56:11.211606    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:11.301418    4024 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0513 23:56:11.711445    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:11.804494    4024 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0513 23:56:12.227238    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:12.330774    4024 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0513 23:56:12.707095    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:12.797198    4024 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0513 23:56:13.209013    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:13.310616    4024 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0513 23:56:13.707916    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:13.802405    4024 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0513 23:56:14.208071    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:14.300349    4024 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0513 23:56:14.710043    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:14.807908    4024 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0513 23:56:15.214077    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:15.316460    4024 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0513 23:56:15.713229    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:15.809847    4024 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0513 23:56:16.203915    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:16.298774    4024 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0513 23:56:16.704421    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:16.803257    4024 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0513 23:56:17.206467    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:17.301814    4024 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0513 23:56:17.704933    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:17.806334    4024 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0513 23:56:18.210482    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:18.299079    4024 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0513 23:56:18.708929    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:18.812212    4024 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0513 23:56:19.214223    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:19.311212    4024 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0513 23:56:19.708201    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:19.797853    4024 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0513 23:56:20.216683    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:20.311167    4024 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0513 23:56:20.717172    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:20.834004    4024 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0513 23:56:21.204248    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:21.300471    4024 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0513 23:56:21.707041    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:21.802765    4024 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0513 23:56:22.209722    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:22.308936    4024 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0513 23:56:22.709492    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:22.814568    4024 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0513 23:56:23.212699    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 23:56:23.397859    4024 command_runner.go:130] > NAME      SECRETS   AGE
	I0513 23:56:23.397859    4024 command_runner.go:130] > default   0         0s
	I0513 23:56:23.397859    4024 kubeadm.go:1107] duration metric: took 12.920266s to wait for elevateKubeSystemPrivileges
	W0513 23:56:23.397859    4024 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0513 23:56:23.397859    4024 kubeadm.go:393] duration metric: took 26.5000732s to StartCluster
	I0513 23:56:23.397859    4024 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:56:23.398490    4024 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 23:56:23.400411    4024 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:56:23.402364    4024 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0513 23:56:23.402473    4024 start.go:234] Will wait 6m0s for node &{Name: IP:172.23.106.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 23:56:23.405608    4024 out.go:177] * Verifying Kubernetes components...
	I0513 23:56:23.402540    4024 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0513 23:56:23.402593    4024 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:56:23.405608    4024 addons.go:69] Setting storage-provisioner=true in profile "multinode-101100"
	I0513 23:56:23.405608    4024 addons.go:69] Setting default-storageclass=true in profile "multinode-101100"
	I0513 23:56:23.409045    4024 addons.go:234] Setting addon storage-provisioner=true in "multinode-101100"
	I0513 23:56:23.409045    4024 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-101100"
	I0513 23:56:23.409045    4024 host.go:66] Checking if "multinode-101100" exists ...
	I0513 23:56:23.409951    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:56:23.410008    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:56:23.418770    4024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:56:23.690317    4024 command_runner.go:130] > apiVersion: v1
	I0513 23:56:23.690317    4024 command_runner.go:130] > data:
	I0513 23:56:23.690317    4024 command_runner.go:130] >   Corefile: |
	I0513 23:56:23.690317    4024 command_runner.go:130] >     .:53 {
	I0513 23:56:23.690317    4024 command_runner.go:130] >         errors
	I0513 23:56:23.690317    4024 command_runner.go:130] >         health {
	I0513 23:56:23.690317    4024 command_runner.go:130] >            lameduck 5s
	I0513 23:56:23.690317    4024 command_runner.go:130] >         }
	I0513 23:56:23.690317    4024 command_runner.go:130] >         ready
	I0513 23:56:23.690317    4024 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0513 23:56:23.690317    4024 command_runner.go:130] >            pods insecure
	I0513 23:56:23.690317    4024 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0513 23:56:23.690317    4024 command_runner.go:130] >            ttl 30
	I0513 23:56:23.690317    4024 command_runner.go:130] >         }
	I0513 23:56:23.690317    4024 command_runner.go:130] >         prometheus :9153
	I0513 23:56:23.690317    4024 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0513 23:56:23.690317    4024 command_runner.go:130] >            max_concurrent 1000
	I0513 23:56:23.690317    4024 command_runner.go:130] >         }
	I0513 23:56:23.690317    4024 command_runner.go:130] >         cache 30
	I0513 23:56:23.690317    4024 command_runner.go:130] >         loop
	I0513 23:56:23.690317    4024 command_runner.go:130] >         reload
	I0513 23:56:23.690317    4024 command_runner.go:130] >         loadbalance
	I0513 23:56:23.690317    4024 command_runner.go:130] >     }
	I0513 23:56:23.690317    4024 command_runner.go:130] > kind: ConfigMap
	I0513 23:56:23.690317    4024 command_runner.go:130] > metadata:
	I0513 23:56:23.690317    4024 command_runner.go:130] >   creationTimestamp: "2024-05-13T23:56:09Z"
	I0513 23:56:23.690317    4024 command_runner.go:130] >   name: coredns
	I0513 23:56:23.690317    4024 command_runner.go:130] >   namespace: kube-system
	I0513 23:56:23.690317    4024 command_runner.go:130] >   resourceVersion: "259"
	I0513 23:56:23.690955    4024 command_runner.go:130] >   uid: 6f046b1e-bd0a-465f-8ce6-dd51880d8b20
	I0513 23:56:23.691019    4024 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.23.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0513 23:56:23.778968    4024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 23:56:24.216463    4024 command_runner.go:130] > configmap/coredns replaced
	I0513 23:56:24.216550    4024 start.go:946] {"host.minikube.internal": 172.23.96.1} host record injected into CoreDNS's ConfigMap
	I0513 23:56:24.217834    4024 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 23:56:24.218695    4024 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 23:56:24.218695    4024 kapi.go:59] client config for multinode-101100: &rest.Config{Host:"https://172.23.106.39:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0513 23:56:24.219609    4024 kapi.go:59] client config for multinode-101100: &rest.Config{Host:"https://172.23.106.39:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0513 23:56:24.219609    4024 cert_rotation.go:137] Starting client certificate rotation controller
	I0513 23:56:24.219609    4024 node_ready.go:35] waiting up to 6m0s for node "multinode-101100" to be "Ready" ...
	I0513 23:56:24.220620    4024 round_trippers.go:463] GET https://172.23.106.39:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0513 23:56:24.220620    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:24.220620    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:24.220620    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:24.220620    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:24.220620    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:24.220620    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:24.220620    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:24.243350    4024 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0513 23:56:24.243350    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:24.243350    4024 round_trippers.go:580]     Audit-Id: 0abca5e0-273b-48aa-b9d8-14889e07a661
	I0513 23:56:24.243350    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:24.243350    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:24.243350    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:24.243350    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:24.243350    4024 round_trippers.go:580]     Content-Length: 291
	I0513 23:56:24.243350    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:24 GMT
	I0513 23:56:24.243350    4024 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f3fe508d-1418-45ab-babb-c6fa2ab7be05","resourceVersion":"386","creationTimestamp":"2024-05-13T23:56:09Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0513 23:56:24.243350    4024 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0513 23:56:24.243350    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:24.243350    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:24.243350    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:24 GMT
	I0513 23:56:24.243350    4024 round_trippers.go:580]     Audit-Id: 4f21d83f-3e5b-4af2-b542-833c138036b0
	I0513 23:56:24.243350    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:24.243350    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:24.243350    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:24.243888    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"353","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0513 23:56:24.245203    4024 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f3fe508d-1418-45ab-babb-c6fa2ab7be05","resourceVersion":"386","creationTimestamp":"2024-05-13T23:56:09Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0513 23:56:24.245203    4024 round_trippers.go:463] PUT https://172.23.106.39:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0513 23:56:24.245203    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:24.245203    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:24.245203    4024 round_trippers.go:473]     Content-Type: application/json
	I0513 23:56:24.245203    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:24.263750    4024 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0513 23:56:24.264705    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:24.264705    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:24.264705    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:24.264705    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:24.264705    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:24.264705    4024 round_trippers.go:580]     Content-Length: 291
	I0513 23:56:24.264705    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:24 GMT
	I0513 23:56:24.264705    4024 round_trippers.go:580]     Audit-Id: be8ab60f-7fd9-49a1-a0ac-169f7fbfc6e7
	I0513 23:56:24.264705    4024 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f3fe508d-1418-45ab-babb-c6fa2ab7be05","resourceVersion":"390","creationTimestamp":"2024-05-13T23:56:09Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0513 23:56:24.734858    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:24.734858    4024 round_trippers.go:463] GET https://172.23.106.39:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0513 23:56:24.734858    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:24.734858    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:24.734858    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:24.734858    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:24.734858    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:24.734858    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:24.738935    4024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:56:24.739283    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:24.739283    4024 round_trippers.go:580]     Audit-Id: 240cc000-afc4-4cbd-aad7-b20a230a7e4e
	I0513 23:56:24.739283    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:24.739283    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:24.739283    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:24.739283    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:24.739283    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:24 GMT
	I0513 23:56:24.739449    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"353","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0513 23:56:24.740170    4024 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:56:24.740170    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:24.740170    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:24.740170    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:24.740170    4024 round_trippers.go:580]     Content-Length: 291
	I0513 23:56:24.740170    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:24 GMT
	I0513 23:56:24.740170    4024 round_trippers.go:580]     Audit-Id: ab93f95c-af5c-4dc1-80b3-0f9f15ad4592
	I0513 23:56:24.740170    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:24.740170    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:24.740170    4024 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f3fe508d-1418-45ab-babb-c6fa2ab7be05","resourceVersion":"400","creationTimestamp":"2024-05-13T23:56:09Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0513 23:56:24.740170    4024 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-101100" context rescaled to 1 replicas
	I0513 23:56:25.226966    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:25.227059    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:25.227059    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:25.227059    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:25.230862    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:56:25.230862    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:25.230862    4024 round_trippers.go:580]     Audit-Id: 501ae2af-63a0-400c-a841-84b620a64674
	I0513 23:56:25.230862    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:25.230862    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:25.230862    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:25.230862    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:25.230862    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:25 GMT
	I0513 23:56:25.230862    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"353","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0513 23:56:25.542308    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:56:25.542308    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:56:25.542528    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:56:25.542528    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:56:25.545513    4024 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 23:56:25.543310    4024 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 23:56:25.548799    4024 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0513 23:56:25.548799    4024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0513 23:56:25.548799    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:56:25.548799    4024 kapi.go:59] client config for multinode-101100: &rest.Config{Host:"https://172.23.106.39:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0513 23:56:25.549451    4024 addons.go:234] Setting addon default-storageclass=true in "multinode-101100"
	I0513 23:56:25.549451    4024 host.go:66] Checking if "multinode-101100" exists ...
	I0513 23:56:25.550460    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:56:25.732896    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:25.732896    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:25.732896    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:25.732896    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:25.736906    4024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:56:25.736906    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:25.736906    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:25 GMT
	I0513 23:56:25.736906    4024 round_trippers.go:580]     Audit-Id: 91d36bc7-f24c-4070-8dcc-e6f54681829e
	I0513 23:56:25.736906    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:25.736906    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:25.736906    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:25.736906    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:25.736906    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"353","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0513 23:56:26.225279    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:26.225279    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:26.225279    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:26.225279    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:26.228873    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:56:26.228873    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:26.229425    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:26.229425    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:26 GMT
	I0513 23:56:26.229425    4024 round_trippers.go:580]     Audit-Id: 5a28b207-4bbd-49c7-b787-1863086096b4
	I0513 23:56:26.229425    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:26.229566    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:26.229566    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:26.229640    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"353","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0513 23:56:26.230527    4024 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0513 23:56:26.730689    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:26.730899    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:26.730899    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:26.730899    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:26.734262    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:56:26.734262    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:26.734262    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:26.734262    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:26 GMT
	I0513 23:56:26.734262    4024 round_trippers.go:580]     Audit-Id: 4abdba3b-3a4d-448c-89b2-c81d1daa1e01
	I0513 23:56:26.734262    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:26.734262    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:26.734262    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:26.734482    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"353","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0513 23:56:27.221804    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:27.221804    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:27.221804    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:27.221804    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:27.225692    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:56:27.225692    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:27.225798    4024 round_trippers.go:580]     Audit-Id: 0f73c0ca-e75d-4456-9085-59056ce5c53d
	I0513 23:56:27.225798    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:27.225798    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:27.225798    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:27.225867    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:27.225867    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:27 GMT
	I0513 23:56:27.226261    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"353","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0513 23:56:27.651197    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:56:27.651197    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:56:27.651197    4024 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0513 23:56:27.651197    4024 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0513 23:56:27.651197    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:56:27.693954    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:56:27.694155    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:56:27.694223    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0513 23:56:27.729758    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:27.730018    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:27.730018    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:27.730018    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:27.733766    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:56:27.733766    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:27.733766    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:27.733766    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:27.733766    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:27.733766    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:27 GMT
	I0513 23:56:27.733766    4024 round_trippers.go:580]     Audit-Id: 17da86d0-381c-4721-9c2b-171d3c205de3
	I0513 23:56:27.734539    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:27.734769    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"353","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0513 23:56:28.234125    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:28.234125    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:28.234125    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:28.234125    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:28.237697    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:56:28.237831    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:28.237831    4024 round_trippers.go:580]     Audit-Id: b3296ab5-f9b7-48ad-a487-0c34586201f8
	I0513 23:56:28.237831    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:28.237831    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:28.237904    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:28.237904    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:28.237904    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:28 GMT
	I0513 23:56:28.238787    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"353","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0513 23:56:28.239403    4024 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0513 23:56:28.726290    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:28.726290    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:28.726290    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:28.726290    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:28.729918    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:56:28.730224    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:28.730224    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:28.730224    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:28.730224    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:28.730224    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:28.730224    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:28 GMT
	I0513 23:56:28.730224    4024 round_trippers.go:580]     Audit-Id: 44492524-3fba-4827-a9ec-f39427b2c141
	I0513 23:56:28.730224    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"353","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0513 23:56:29.235896    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:29.235896    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:29.235896    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:29.236131    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:29.239616    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:56:29.240078    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:29.240078    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:29.240078    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:29 GMT
	I0513 23:56:29.240078    4024 round_trippers.go:580]     Audit-Id: 5f74d182-4220-46d7-ac33-d736eca81abb
	I0513 23:56:29.240078    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:29.240078    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:29.240078    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:29.240257    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"353","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0513 23:56:29.685255    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:56:29.685255    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:56:29.685984    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0513 23:56:29.724290    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:29.724290    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:29.724365    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:29.724365    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:29.808082    4024 round_trippers.go:574] Response Status: 200 OK in 83 milliseconds
	I0513 23:56:29.808438    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:29.808438    4024 round_trippers.go:580]     Audit-Id: e1058fcd-658c-4bdd-99bc-77b03f54072d
	I0513 23:56:29.808438    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:29.808529    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:29.808529    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:29.808529    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:29.808529    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:30 GMT
	I0513 23:56:29.810148    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"353","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0513 23:56:30.100327    4024 main.go:141] libmachine: [stdout =====>] : 172.23.106.39
	
	I0513 23:56:30.100425    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:56:30.100748    4024 sshutil.go:53] new ssh client: &{IP:172.23.106.39 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\id_rsa Username:docker}
	I0513 23:56:30.223972    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:30.223972    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:30.223972    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:30.223972    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:30.227610    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:56:30.227996    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:30.227996    4024 round_trippers.go:580]     Audit-Id: 95e3e39f-a263-449e-b8bb-2e93fea0b914
	I0513 23:56:30.227996    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:30.227996    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:30.227996    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:30.228144    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:30.228144    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:30 GMT
	I0513 23:56:30.228704    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"353","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0513 23:56:30.259594    4024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0513 23:56:30.716934    4024 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0513 23:56:30.716934    4024 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0513 23:56:30.716934    4024 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0513 23:56:30.716934    4024 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0513 23:56:30.716934    4024 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0513 23:56:30.716934    4024 command_runner.go:130] > pod/storage-provisioner created
	I0513 23:56:30.734934    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:30.734934    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:30.734934    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:30.734934    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:30.737604    4024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:56:30.737802    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:30.737802    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:30.737802    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:30.737802    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:30.737802    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:30.737802    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:30 GMT
	I0513 23:56:30.737802    4024 round_trippers.go:580]     Audit-Id: 23f2cb19-e806-4fcc-8d7a-6972b2fddc8c
	I0513 23:56:30.737802    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"353","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0513 23:56:30.738551    4024 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0513 23:56:31.226214    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:31.226214    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:31.226214    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:31.226214    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:31.230460    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:56:31.230460    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:31.230539    4024 round_trippers.go:580]     Audit-Id: d1ea9ba2-af4a-41b3-bbd9-510a1c1a73e5
	I0513 23:56:31.230539    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:31.230539    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:31.230539    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:31.230539    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:31.230605    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:31 GMT
	I0513 23:56:31.231166    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"353","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0513 23:56:31.730466    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:31.730466    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:31.730466    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:31.730466    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:31.736278    4024 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:56:31.736278    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:31.736278    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:31.736278    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:31.736278    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:31 GMT
	I0513 23:56:31.736278    4024 round_trippers.go:580]     Audit-Id: 58f17dcf-b6fa-4ffe-b6c9-32b8c3e19c25
	I0513 23:56:31.736278    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:31.736278    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:31.736959    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"353","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0513 23:56:32.027787    4024 main.go:141] libmachine: [stdout =====>] : 172.23.106.39
	
	I0513 23:56:32.027787    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:56:32.028256    4024 sshutil.go:53] new ssh client: &{IP:172.23.106.39 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\id_rsa Username:docker}
	I0513 23:56:32.166313    4024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0513 23:56:32.233330    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:32.233330    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:32.233330    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:32.233330    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:32.237908    4024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:56:32.237959    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:32.237959    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:32 GMT
	I0513 23:56:32.237959    4024 round_trippers.go:580]     Audit-Id: 2ed691aa-ff23-40d5-9ef3-1a2ff690708a
	I0513 23:56:32.238015    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:32.238015    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:32.238015    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:32.238067    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:32.241241    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"353","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0513 23:56:32.328661    4024 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0513 23:56:32.329673    4024 round_trippers.go:463] GET https://172.23.106.39:8443/apis/storage.k8s.io/v1/storageclasses
	I0513 23:56:32.329673    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:32.329673    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:32.329673    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:32.332686    4024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:56:32.332729    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:32.332759    4024 round_trippers.go:580]     Audit-Id: 2095885a-546d-464c-9399-0b173db7ba10
	I0513 23:56:32.332759    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:32.332759    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:32.332835    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:32.332835    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:32.332835    4024 round_trippers.go:580]     Content-Length: 1273
	I0513 23:56:32.332868    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:32 GMT
	I0513 23:56:32.332912    4024 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"standard","uid":"4cbcb0e4-4f80-440b-8c68-3bb459f86a89","resourceVersion":"422","creationTimestamp":"2024-05-13T23:56:32Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-13T23:56:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0513 23:56:32.333619    4024 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"4cbcb0e4-4f80-440b-8c68-3bb459f86a89","resourceVersion":"422","creationTimestamp":"2024-05-13T23:56:32Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-13T23:56:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0513 23:56:32.333734    4024 round_trippers.go:463] PUT https://172.23.106.39:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0513 23:56:32.333768    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:32.333768    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:32.333811    4024 round_trippers.go:473]     Content-Type: application/json
	I0513 23:56:32.333811    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:32.337530    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:56:32.337669    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:32.337669    4024 round_trippers.go:580]     Audit-Id: 79645fd7-a269-4381-8031-5b206fd63379
	I0513 23:56:32.337669    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:32.337669    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:32.337669    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:32.337737    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:32.337737    4024 round_trippers.go:580]     Content-Length: 1220
	I0513 23:56:32.337737    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:32 GMT
	I0513 23:56:32.337737    4024 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"4cbcb0e4-4f80-440b-8c68-3bb459f86a89","resourceVersion":"422","creationTimestamp":"2024-05-13T23:56:32Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-13T23:56:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0513 23:56:32.341551    4024 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0513 23:56:32.343794    4024 addons.go:505] duration metric: took 8.9408231s for enable addons: enabled=[storage-provisioner default-storageclass]
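The default-storageclass addon sequence above applies `storageclass.yaml`, then PUTs the object back carrying the `storageclass.kubernetes.io/is-default-class: "true"` annotation, which is what makes `standard` the cluster default. A minimal standalone sketch (an illustrative helper, not minikube's actual code) of checking that annotation on a StorageClass response body like the one logged:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// isDefaultClass reports whether a StorageClass JSON body carries the
// storageclass.kubernetes.io/is-default-class annotation set to "true".
// Only the metadata.annotations field is decoded; everything else in
// the response body is ignored.
func isDefaultClass(body []byte) bool {
	var sc struct {
		Metadata struct {
			Annotations map[string]string `json:"annotations"`
		} `json:"metadata"`
	}
	if err := json.Unmarshal(body, &sc); err != nil {
		return false
	}
	return sc.Metadata.Annotations["storageclass.kubernetes.io/is-default-class"] == "true"
}

func main() {
	// Trimmed-down version of the StorageClass body in the log above.
	body := []byte(`{"metadata":{"name":"standard","annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}`)
	fmt.Println(isDefaultClass(body))
}
```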
	I0513 23:56:32.734609    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:32.734609    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:32.734609    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:32.734609    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:32.738452    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:56:32.738556    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:32.738556    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:32.738638    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:32.738638    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:32.738638    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:32.738638    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:32 GMT
	I0513 23:56:32.738638    4024 round_trippers.go:580]     Audit-Id: 6167f61c-5a05-4a45-8a10-26a77e99a733
	I0513 23:56:32.738705    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"353","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0513 23:56:32.739570    4024 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0513 23:56:33.234504    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:33.234687    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:33.234687    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:33.234687    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:33.241304    4024 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:56:33.241304    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:33.241304    4024 round_trippers.go:580]     Audit-Id: f8bd796e-6c9e-444a-bfe3-9568de1862ed
	I0513 23:56:33.241304    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:33.241304    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:33.241304    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:33.241304    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:33.241304    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:33 GMT
	I0513 23:56:33.242361    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"353","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0513 23:56:33.735313    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:33.735313    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:33.735313    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:33.735452    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:33.739363    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:56:33.739363    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:33.739363    4024 round_trippers.go:580]     Audit-Id: 0d9de026-318f-48c6-823b-51eca3a4210b
	I0513 23:56:33.739363    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:33.739363    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:33.739363    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:33.739363    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:33.739363    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:33 GMT
	I0513 23:56:33.740364    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"353","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0513 23:56:34.233513    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:34.233606    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:34.233606    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:34.233606    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:34.236984    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:56:34.236984    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:34.236984    4024 round_trippers.go:580]     Audit-Id: d2d0270d-3f5a-445c-b2df-e43a277394c6
	I0513 23:56:34.236984    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:34.236984    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:34.236984    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:34.236984    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:34.236984    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:34 GMT
	I0513 23:56:34.237711    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"353","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0513 23:56:34.736497    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:34.736497    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:34.736497    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:34.736497    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:34.739165    4024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:56:34.739165    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:34.739165    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:34.739165    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:34.739165    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:34.739165    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:34.739165    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:34 GMT
	I0513 23:56:34.739165    4024 round_trippers.go:580]     Audit-Id: 778c3c9c-8be5-4783-b560-4e73451a8b02
	I0513 23:56:34.740511    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"353","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0513 23:56:34.741088    4024 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0513 23:56:35.234081    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:35.234081    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:35.234081    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:35.234081    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:35.238396    4024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:56:35.238396    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:35.238396    4024 round_trippers.go:580]     Audit-Id: 93af0c8b-2d19-417a-b716-4ddb81469da7
	I0513 23:56:35.238396    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:35.238396    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:35.238396    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:35.238396    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:35.238396    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:35 GMT
	I0513 23:56:35.239340    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"425","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0513 23:56:35.239340    4024 node_ready.go:49] node "multinode-101100" has status "Ready":"True"
	I0513 23:56:35.239340    4024 node_ready.go:38] duration metric: took 11.0181063s for node "multinode-101100" to be "Ready" ...
	I0513 23:56:35.239340    4024 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
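The `node_ready` lines above come from repeatedly GETting the Node object and inspecting its `Ready` condition until it flips from `"False"` to `"True"`. A standalone sketch (assumed field shapes, not minikube's implementation) of extracting that condition from a Node response body of the form shown in the log:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nodeStatus mirrors only the parts of a v1.Node response the
// readiness check needs: the conditions list under .status.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// nodeReady reports whether the "Ready" condition in a Node JSON body
// is "True" — the value surfaced in the
// `node "multinode-101100" has status "Ready":"False"/"True"` lines.
func nodeReady(body []byte) (bool, error) {
	var n nodeStatus
	if err := json.Unmarshal(body, &n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	// No Ready condition present yet: treat as not ready.
	return false, nil
}

func main() {
	body := []byte(`{"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
	ready, err := nodeReady(body)
	fmt.Println(ready, err)
}
```

In the real flow this check sits inside a poll loop with a deadline (the log shows roughly 500ms between GETs and an 11s total wait here).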
	I0513 23:56:35.239879    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods
	I0513 23:56:35.239879    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:35.239879    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:35.239879    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:35.259844    4024 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0513 23:56:35.259844    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:35.259844    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:35.260325    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:35.260325    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:35 GMT
	I0513 23:56:35.260325    4024 round_trippers.go:580]     Audit-Id: 71b41feb-cd1d-4855-9f81-929a57de6950
	I0513 23:56:35.260325    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:35.260325    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:35.262130    4024 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"426","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52787 chars]
	I0513 23:56:35.266233    4024 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace to be "Ready" ...
	I0513 23:56:35.266414    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0513 23:56:35.266414    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:35.266414    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:35.266414    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:35.275574    4024 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0513 23:56:35.275574    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:35.275754    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:35.275754    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:35 GMT
	I0513 23:56:35.275754    4024 round_trippers.go:580]     Audit-Id: 020a2c9e-1c80-449c-8515-bdee89fda2be
	I0513 23:56:35.275754    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:35.275754    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:35.275754    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:35.275996    4024 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"426","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 4762 chars]
	I0513 23:56:35.276551    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:35.276602    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:35.276602    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:35.276602    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:35.281743    4024 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:56:35.281743    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:35.281743    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:35.281743    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:35.281743    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:35.281743    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:35.281743    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:35 GMT
	I0513 23:56:35.281743    4024 round_trippers.go:580]     Audit-Id: c2294519-6e5d-4937-b51d-e4cdeff1d2d2
	I0513 23:56:35.284297    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"425","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0513 23:56:35.769915    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0513 23:56:35.770115    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:35.770170    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:35.770170    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:35.773388    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:56:35.773607    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:35.773607    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:35.773607    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:35.773607    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:35 GMT
	I0513 23:56:35.773607    4024 round_trippers.go:580]     Audit-Id: dca68ee2-20bc-4290-a3e5-de16a8235a35
	I0513 23:56:35.773607    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:35.773607    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:35.774386    4024 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"430","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0513 23:56:35.775232    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:35.775307    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:35.775307    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:35.775307    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:35.777389    4024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:56:35.777727    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:35.777727    4024 round_trippers.go:580]     Audit-Id: 1d2ffc60-c4aa-43af-8af0-d9c8e15f3acc
	I0513 23:56:35.777727    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:35.777727    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:35.777727    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:35.777727    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:35.777727    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:35 GMT
	I0513 23:56:35.778001    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"425","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0513 23:56:36.280511    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0513 23:56:36.280598    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:36.280598    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:36.280598    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:36.284884    4024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:56:36.284884    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:36.284884    4024 round_trippers.go:580]     Audit-Id: b9cca5b0-1d2c-488d-8ee9-ecdeb5b3b444
	I0513 23:56:36.284884    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:36.284884    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:36.284884    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:36.284884    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:36.284884    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:36 GMT
	I0513 23:56:36.284884    4024 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"430","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0513 23:56:36.285879    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:36.285879    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:36.285879    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:36.285879    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:36.287891    4024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:56:36.288889    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:36.288889    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:36 GMT
	I0513 23:56:36.288889    4024 round_trippers.go:580]     Audit-Id: fbfbf59e-dda2-4f79-996e-480c9f803a59
	I0513 23:56:36.288889    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:36.288889    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:36.288889    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:36.288889    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:36.288889    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"425","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0513 23:56:36.771713    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0513 23:56:36.771887    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:36.771887    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:36.771887    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:36.775100    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:56:36.776070    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:36.776070    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:36.776070    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:36.776070    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:36 GMT
	I0513 23:56:36.776070    4024 round_trippers.go:580]     Audit-Id: 99eeb656-6763-4089-ad5a-90a690435dd2
	I0513 23:56:36.776070    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:36.776070    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:36.776560    4024 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"442","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0513 23:56:36.777455    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:36.777455    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:36.777544    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:36.777544    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:36.779855    4024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:56:36.779855    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:36.779855    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:36.779855    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:36.779855    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:36.779855    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:36.779855    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:36 GMT
	I0513 23:56:36.779855    4024 round_trippers.go:580]     Audit-Id: 232dadcc-62b4-424c-84bb-d87e42ae7ccf
	I0513 23:56:36.781009    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"425","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0513 23:56:36.781641    4024 pod_ready.go:92] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"True"
	I0513 23:56:36.781641    4024 pod_ready.go:81] duration metric: took 1.515235s for pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace to be "Ready" ...
	I0513 23:56:36.781716    4024 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0513 23:56:36.781872    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-101100
	I0513 23:56:36.781872    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:36.781872    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:36.781872    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:36.784600    4024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:56:36.784600    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:36.784600    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:36.784600    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:36.784600    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:36.784600    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:36 GMT
	I0513 23:56:36.784600    4024 round_trippers.go:580]     Audit-Id: e041d53c-4bd6-4fe7-aea9-8d45890640ca
	I0513 23:56:36.784600    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:36.785059    4024 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-101100","namespace":"kube-system","uid":"cd31d030-75f8-4abb-bcad-34031cec7aa6","resourceVersion":"328","creationTimestamp":"2024-05-13T23:56:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.106.39:2379","kubernetes.io/config.hash":"1af4b764a5249ff25d3c1c709387c273","kubernetes.io/config.mirror":"1af4b764a5249ff25d3c1c709387c273","kubernetes.io/config.seen":"2024-05-13T23:56:09.392109641Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0513 23:56:36.785711    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:36.785792    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:36.785792    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:36.785792    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:36.788711    4024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:56:36.788711    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:36.788711    4024 round_trippers.go:580]     Audit-Id: 0d157b93-220f-4ad1-bed4-02c49026e68c
	I0513 23:56:36.788711    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:36.788711    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:36.788711    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:36.788711    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:36.788711    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:37 GMT
	I0513 23:56:36.788711    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"425","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0513 23:56:36.788711    4024 pod_ready.go:92] pod "etcd-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0513 23:56:36.788711    4024 pod_ready.go:81] duration metric: took 6.9944ms for pod "etcd-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0513 23:56:36.789721    4024 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0513 23:56:36.789775    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-101100
	I0513 23:56:36.789829    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:36.789829    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:36.789829    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:36.792765    4024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:56:36.792765    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:36.792765    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:37 GMT
	I0513 23:56:36.792765    4024 round_trippers.go:580]     Audit-Id: 80a46ce0-7365-4e10-b78b-642f1db01536
	I0513 23:56:36.792765    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:36.792765    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:36.792765    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:36.792765    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:36.793148    4024 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-101100","namespace":"kube-system","uid":"1d9c79a4-1e4a-46fb-b3e8-02a4775f40af","resourceVersion":"312","creationTimestamp":"2024-05-13T23:56:07Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.106.39:8443","kubernetes.io/config.hash":"03d9b35578220c9e99f77722d9aa294f","kubernetes.io/config.mirror":"03d9b35578220c9e99f77722d9aa294f","kubernetes.io/config.seen":"2024-05-13T23:56:02.155854146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0513 23:56:36.793678    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:36.793678    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:36.793678    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:36.793678    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:36.795719    4024 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0513 23:56:36.796724    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:36.796724    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:36.796724    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:36.796724    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:37 GMT
	I0513 23:56:36.796724    4024 round_trippers.go:580]     Audit-Id: ab279c39-ebc1-41c6-aa18-de2bafc5ce27
	I0513 23:56:36.796724    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:36.796724    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:36.796841    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"425","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0513 23:56:36.797306    4024 pod_ready.go:92] pod "kube-apiserver-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0513 23:56:36.797368    4024 pod_ready.go:81] duration metric: took 7.6466ms for pod "kube-apiserver-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0513 23:56:36.797368    4024 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0513 23:56:36.797455    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-101100
	I0513 23:56:36.797513    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:36.797513    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:36.797513    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:36.801749    4024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:56:36.801781    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:36.801781    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:36.801781    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:36.801781    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:36.801781    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:36.801781    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:37 GMT
	I0513 23:56:36.801781    4024 round_trippers.go:580]     Audit-Id: b4cf85c0-1688-48c3-b46f-1a4ae3631fa1
	I0513 23:56:36.802141    4024 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-101100","namespace":"kube-system","uid":"1a74381a-7477-4fd3-b344-c4a230014f97","resourceVersion":"308","creationTimestamp":"2024-05-13T23:56:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5393de2704b2efef461d22fa52aa93c8","kubernetes.io/config.mirror":"5393de2704b2efef461d22fa52aa93c8","kubernetes.io/config.seen":"2024-05-13T23:56:09.392106640Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0513 23:56:36.802544    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:36.802544    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:36.802544    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:36.802544    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:36.821796    4024 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0513 23:56:36.821796    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:36.821796    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:37 GMT
	I0513 23:56:36.821796    4024 round_trippers.go:580]     Audit-Id: 5f0761b3-478b-46c3-866e-ff5d755507f5
	I0513 23:56:36.821796    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:36.821796    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:36.821796    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:36.821796    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:36.821796    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"425","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0513 23:56:36.822500    4024 pod_ready.go:92] pod "kube-controller-manager-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0513 23:56:36.822500    4024 pod_ready.go:81] duration metric: took 25.1312ms for pod "kube-controller-manager-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0513 23:56:36.822500    4024 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zhcz6" in "kube-system" namespace to be "Ready" ...
	I0513 23:56:36.822628    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zhcz6
	I0513 23:56:36.822628    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:36.822628    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:36.822628    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:36.825238    4024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:56:36.825238    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:36.825238    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:37 GMT
	I0513 23:56:36.825238    4024 round_trippers.go:580]     Audit-Id: 0ca93e23-ba95-40e1-a9d2-bdd3022db38d
	I0513 23:56:36.825238    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:36.825238    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:36.825238    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:36.825238    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:36.825238    4024 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zhcz6","generateName":"kube-proxy-","namespace":"kube-system","uid":"a9a488af-41ba-47f3-87b0-5a2f062afad6","resourceVersion":"403","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0513 23:56:36.849969    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:36.849969    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:36.849969    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:36.849969    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:36.852560    4024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:56:36.852560    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:36.852560    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:37 GMT
	I0513 23:56:36.852560    4024 round_trippers.go:580]     Audit-Id: 7bc4ab16-a058-46fd-863a-65f87db20663
	I0513 23:56:36.852560    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:36.852560    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:36.852560    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:36.852560    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:36.853566    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"425","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0513 23:56:36.853805    4024 pod_ready.go:92] pod "kube-proxy-zhcz6" in "kube-system" namespace has status "Ready":"True"
	I0513 23:56:36.853805    4024 pod_ready.go:81] duration metric: took 31.3026ms for pod "kube-proxy-zhcz6" in "kube-system" namespace to be "Ready" ...
	I0513 23:56:36.853805    4024 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0513 23:56:37.037780    4024 request.go:629] Waited for 183.413ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-101100
	I0513 23:56:37.037870    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-101100
	I0513 23:56:37.037870    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:37.037870    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:37.037870    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:37.040238    4024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:56:37.040911    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:37.040911    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:37.040911    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:37.041002    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:37.041002    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:37 GMT
	I0513 23:56:37.041002    4024 round_trippers.go:580]     Audit-Id: 680bc234-d9b8-4d7c-8543-d74d7b932184
	I0513 23:56:37.041002    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:37.041227    4024 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-101100","namespace":"kube-system","uid":"d7300c2d-377f-4061-bd34-5f7593b7e827","resourceVersion":"306","creationTimestamp":"2024-05-13T23:56:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8083abd658221f47cabf81a00c4ca98e","kubernetes.io/config.mirror":"8083abd658221f47cabf81a00c4ca98e","kubernetes.io/config.seen":"2024-05-13T23:56:09.392108241Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0513 23:56:37.240880    4024 request.go:629] Waited for 198.807ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:37.240996    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:56:37.240996    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:37.241261    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:37.241261    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:37.244185    4024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:56:37.244185    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:37.244185    4024 round_trippers.go:580]     Audit-Id: e56c7a15-21b3-4744-853f-b927eba195bd
	I0513 23:56:37.244185    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:37.244185    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:37.244185    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:37.244185    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:37.244185    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:37 GMT
	I0513 23:56:37.244654    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"425","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0513 23:56:37.245157    4024 pod_ready.go:92] pod "kube-scheduler-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0513 23:56:37.245157    4024 pod_ready.go:81] duration metric: took 391.3303ms for pod "kube-scheduler-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0513 23:56:37.245195    4024 pod_ready.go:38] duration metric: took 2.0057047s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0513 23:56:37.245195    4024 api_server.go:52] waiting for apiserver process to appear ...
	I0513 23:56:37.253835    4024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 23:56:37.277955    4024 command_runner.go:130] > 1934
	I0513 23:56:37.278248    4024 api_server.go:72] duration metric: took 13.8749351s to wait for apiserver process to appear ...
	I0513 23:56:37.278248    4024 api_server.go:88] waiting for apiserver healthz status ...
	I0513 23:56:37.278349    4024 api_server.go:253] Checking apiserver healthz at https://172.23.106.39:8443/healthz ...
	I0513 23:56:37.286048    4024 api_server.go:279] https://172.23.106.39:8443/healthz returned 200:
	ok
	I0513 23:56:37.286334    4024 round_trippers.go:463] GET https://172.23.106.39:8443/version
	I0513 23:56:37.286334    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:37.286334    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:37.286334    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:37.287530    4024 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0513 23:56:37.287530    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:37.287530    4024 round_trippers.go:580]     Audit-Id: a0e3e15f-a12c-4e47-98d9-5cd93690888e
	I0513 23:56:37.287530    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:37.287530    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:37.287530    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:37.287530    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:37.287530    4024 round_trippers.go:580]     Content-Length: 263
	I0513 23:56:37.287530    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:37 GMT
	I0513 23:56:37.287530    4024 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0513 23:56:37.288083    4024 api_server.go:141] control plane version: v1.30.0
	I0513 23:56:37.288118    4024 api_server.go:131] duration metric: took 9.8697ms to wait for apiserver health ...
	I0513 23:56:37.288118    4024 system_pods.go:43] waiting for kube-system pods to appear ...
	I0513 23:56:37.444683    4024 request.go:629] Waited for 156.3998ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods
	I0513 23:56:37.444779    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods
	I0513 23:56:37.444779    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:37.445022    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:37.445022    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:37.453271    4024 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 23:56:37.453271    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:37.453271    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:37.453271    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:37.453271    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:37.453271    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:37.453271    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:37 GMT
	I0513 23:56:37.453271    4024 round_trippers.go:580]     Audit-Id: e85d556f-2c2c-4dde-bd27-8dcedeec9911
	I0513 23:56:37.454167    4024 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"442","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0513 23:56:37.458066    4024 system_pods.go:59] 8 kube-system pods found
	I0513 23:56:37.458137    4024 system_pods.go:61] "coredns-7db6d8ff4d-4kmx4" [06858a47-f51b-48d8-a2a6-f60b8107be13] Running
	I0513 23:56:37.458137    4024 system_pods.go:61] "etcd-multinode-101100" [cd31d030-75f8-4abb-bcad-34031cec7aa6] Running
	I0513 23:56:37.458137    4024 system_pods.go:61] "kindnet-9q2tv" [5b3ee167-f21f-46b3-bace-03a7233717e0] Running
	I0513 23:56:37.458137    4024 system_pods.go:61] "kube-apiserver-multinode-101100" [1d9c79a4-1e4a-46fb-b3e8-02a4775f40af] Running
	I0513 23:56:37.458137    4024 system_pods.go:61] "kube-controller-manager-multinode-101100" [1a74381a-7477-4fd3-b344-c4a230014f97] Running
	I0513 23:56:37.458137    4024 system_pods.go:61] "kube-proxy-zhcz6" [a9a488af-41ba-47f3-87b0-5a2f062afad6] Running
	I0513 23:56:37.458137    4024 system_pods.go:61] "kube-scheduler-multinode-101100" [d7300c2d-377f-4061-bd34-5f7593b7e827] Running
	I0513 23:56:37.458137    4024 system_pods.go:61] "storage-provisioner" [a92f04b8-a93f-42d8-81d7-d4da6bf2e247] Running
	I0513 23:56:37.458216    4024 system_pods.go:74] duration metric: took 170.0095ms to wait for pod list to return data ...
	I0513 23:56:37.458216    4024 default_sa.go:34] waiting for default service account to be created ...
	I0513 23:56:37.647833    4024 request.go:629] Waited for 189.1412ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.106.39:8443/api/v1/namespaces/default/serviceaccounts
	I0513 23:56:37.647973    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/namespaces/default/serviceaccounts
	I0513 23:56:37.647973    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:37.647973    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:37.647973    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:37.650573    4024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:56:37.650573    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:37.650573    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:37 GMT
	I0513 23:56:37.650573    4024 round_trippers.go:580]     Audit-Id: 7738bc7d-eaf5-41a6-9228-61ad307f74e1
	I0513 23:56:37.650573    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:37.650573    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:37.650573    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:37.650573    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:37.650573    4024 round_trippers.go:580]     Content-Length: 261
	I0513 23:56:37.650573    4024 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"f8245e64-9479-49b1-8b02-d2e6351373e3","resourceVersion":"345","creationTimestamp":"2024-05-13T23:56:23Z"}}]}
	I0513 23:56:37.651747    4024 default_sa.go:45] found service account: "default"
	I0513 23:56:37.651747    4024 default_sa.go:55] duration metric: took 193.5206ms for default service account to be created ...
	I0513 23:56:37.651816    4024 system_pods.go:116] waiting for k8s-apps to be running ...
	I0513 23:56:37.835245    4024 request.go:629] Waited for 183.095ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods
	I0513 23:56:37.835245    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods
	I0513 23:56:37.835473    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:37.835473    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:37.835473    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:37.839984    4024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:56:37.839984    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:37.839984    4024 round_trippers.go:580]     Audit-Id: 17778103-eadb-4be9-9949-8ccec34b62ef
	I0513 23:56:37.839984    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:37.839984    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:37.839984    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:37.839984    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:37.839984    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:38 GMT
	I0513 23:56:37.841220    4024 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"442","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0513 23:56:37.843937    4024 system_pods.go:86] 8 kube-system pods found
	I0513 23:56:37.843937    4024 system_pods.go:89] "coredns-7db6d8ff4d-4kmx4" [06858a47-f51b-48d8-a2a6-f60b8107be13] Running
	I0513 23:56:37.844039    4024 system_pods.go:89] "etcd-multinode-101100" [cd31d030-75f8-4abb-bcad-34031cec7aa6] Running
	I0513 23:56:37.844039    4024 system_pods.go:89] "kindnet-9q2tv" [5b3ee167-f21f-46b3-bace-03a7233717e0] Running
	I0513 23:56:37.844039    4024 system_pods.go:89] "kube-apiserver-multinode-101100" [1d9c79a4-1e4a-46fb-b3e8-02a4775f40af] Running
	I0513 23:56:37.844039    4024 system_pods.go:89] "kube-controller-manager-multinode-101100" [1a74381a-7477-4fd3-b344-c4a230014f97] Running
	I0513 23:56:37.844039    4024 system_pods.go:89] "kube-proxy-zhcz6" [a9a488af-41ba-47f3-87b0-5a2f062afad6] Running
	I0513 23:56:37.844039    4024 system_pods.go:89] "kube-scheduler-multinode-101100" [d7300c2d-377f-4061-bd34-5f7593b7e827] Running
	I0513 23:56:37.844039    4024 system_pods.go:89] "storage-provisioner" [a92f04b8-a93f-42d8-81d7-d4da6bf2e247] Running
	I0513 23:56:37.844039    4024 system_pods.go:126] duration metric: took 192.2123ms to wait for k8s-apps to be running ...
	I0513 23:56:37.844039    4024 system_svc.go:44] waiting for kubelet service to be running ....
	I0513 23:56:37.851858    4024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 23:56:37.875148    4024 system_svc.go:56] duration metric: took 31.1067ms WaitForService to wait for kubelet
	I0513 23:56:37.875293    4024 kubeadm.go:576] duration metric: took 14.4719467s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 23:56:37.875293    4024 node_conditions.go:102] verifying NodePressure condition ...
	I0513 23:56:38.038976    4024 request.go:629] Waited for 163.53ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.106.39:8443/api/v1/nodes
	I0513 23:56:38.039162    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes
	I0513 23:56:38.039162    4024 round_trippers.go:469] Request Headers:
	I0513 23:56:38.039162    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:56:38.039162    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:56:38.042600    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:56:38.043248    4024 round_trippers.go:577] Response Headers:
	I0513 23:56:38.043248    4024 round_trippers.go:580]     Audit-Id: 1801401c-a180-4d1d-97c1-021484c8ac32
	I0513 23:56:38.043248    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:56:38.043321    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:56:38.043353    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:56:38.043353    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:56:38.043353    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:56:38 GMT
	I0513 23:56:38.043473    4024 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"425","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4836 chars]
	I0513 23:56:38.044345    4024 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0513 23:56:38.044406    4024 node_conditions.go:123] node cpu capacity is 2
	I0513 23:56:38.044406    4024 node_conditions.go:105] duration metric: took 169.1036ms to run NodePressure ...
	I0513 23:56:38.044519    4024 start.go:240] waiting for startup goroutines ...
	I0513 23:56:38.044519    4024 start.go:245] waiting for cluster config update ...
	I0513 23:56:38.044519    4024 start.go:254] writing updated cluster config ...
	I0513 23:56:38.049366    4024 out.go:177] 
	I0513 23:56:38.052101    4024 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:56:38.059314    4024 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:56:38.059314    4024 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\config.json ...
	I0513 23:56:38.064072    4024 out.go:177] * Starting "multinode-101100-m02" worker node in "multinode-101100" cluster
	I0513 23:56:38.068356    4024 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 23:56:38.068356    4024 cache.go:56] Caching tarball of preloaded images
	I0513 23:56:38.068630    4024 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0513 23:56:38.068630    4024 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 23:56:38.069163    4024 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\config.json ...
	I0513 23:56:38.072070    4024 start.go:360] acquireMachinesLock for multinode-101100-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 23:56:38.072701    4024 start.go:364] duration metric: took 630.9µs to acquireMachinesLock for "multinode-101100-m02"
	I0513 23:56:38.072885    4024 start.go:93] Provisioning new machine with config: &{Name:multinode-101100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-101100
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.106.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString
:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0513 23:56:38.072885    4024 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0513 23:56:38.075679    4024 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 23:56:38.076211    4024 start.go:159] libmachine.API.Create for "multinode-101100" (driver="hyperv")
	I0513 23:56:38.076211    4024 client.go:168] LocalClient.Create starting
	I0513 23:56:38.076355    4024 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0513 23:56:38.076355    4024 main.go:141] libmachine: Decoding PEM data...
	I0513 23:56:38.076355    4024 main.go:141] libmachine: Parsing certificate...
	I0513 23:56:38.076881    4024 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0513 23:56:38.077000    4024 main.go:141] libmachine: Decoding PEM data...
	I0513 23:56:38.077000    4024 main.go:141] libmachine: Parsing certificate...
	I0513 23:56:38.077000    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0513 23:56:39.728087    4024 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0513 23:56:39.728087    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:56:39.728375    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0513 23:56:41.298282    4024 main.go:141] libmachine: [stdout =====>] : False
	
	I0513 23:56:41.298282    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:56:41.298282    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0513 23:56:42.633674    4024 main.go:141] libmachine: [stdout =====>] : True
	
	I0513 23:56:42.633902    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:56:42.633902    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0513 23:56:45.907114    4024 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0513 23:56:45.907401    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:56:45.909304    4024 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-amd64.iso...
	I0513 23:56:46.252476    4024 main.go:141] libmachine: Creating SSH key...
	I0513 23:56:46.610713    4024 main.go:141] libmachine: Creating VM...
	I0513 23:56:46.610713    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0513 23:56:49.213320    4024 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0513 23:56:49.213320    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:56:49.214277    4024 main.go:141] libmachine: Using switch "Default Switch"
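	(Annotation) The driver picks a switch by parsing the JSON that the `Get-VMSwitch` pipeline above prints, with External switches sorted first so they win over the Default Switch fallback. A minimal sketch of that selection, assuming only the JSON shape visible in the log (`Id`, `Name`, `SwitchType`); this is not minikube's actual code:

	```go
	package main

	import (
		"encoding/json"
		"fmt"
	)

	// vmSwitch mirrors the three fields selected by the PowerShell pipeline in the log.
	type vmSwitch struct {
		Id         string
		Name       string
		SwitchType int
	}

	// pickSwitch takes the first entry of the (SwitchType-sorted) JSON array,
	// mimicking how the sorted output puts usable External switches first.
	func pickSwitch(raw []byte) (vmSwitch, error) {
		var switches []vmSwitch
		if err := json.Unmarshal(raw, &switches); err != nil {
			return vmSwitch{}, err
		}
		if len(switches) == 0 {
			return vmSwitch{}, fmt.Errorf("no usable Hyper-V switch found")
		}
		return switches[0], nil
	}

	func main() {
		// The exact JSON payload seen in the log above.
		raw := []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)
		sw, err := pickSwitch(raw)
		if err != nil {
			panic(err)
		}
		fmt.Printf("Using switch %q\n", sw.Name)
	}
	```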
	I0513 23:56:49.214367    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0513 23:56:50.744737    4024 main.go:141] libmachine: [stdout =====>] : True
	
	I0513 23:56:50.744992    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:56:50.744992    4024 main.go:141] libmachine: Creating VHD
	I0513 23:56:50.744992    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0513 23:56:54.201547    4024 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 3475AEDB-8C57-443A-BDCA-1EBC80010BA1
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0513 23:56:54.202496    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:56:54.202496    4024 main.go:141] libmachine: Writing magic tar header
	I0513 23:56:54.202571    4024 main.go:141] libmachine: Writing SSH key tar header
	I0513 23:56:54.210980    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0513 23:56:57.149323    4024 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:56:57.150231    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:56:57.150231    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m02\disk.vhd' -SizeBytes 20000MB
	I0513 23:56:59.469263    4024 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:56:59.469263    4024 main.go:141] libmachine: [stderr =====>] : 
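	(Annotation) The "Writing magic tar header" / "Writing SSH key tar header" steps above are believed to write a small tar stream onto the fixed 10MB VHD so that the boot2docker ISO formats the disk and installs the SSH key on first boot; the disk is then converted to dynamic and resized. A sketch of building such a tar stream in memory, with illustrative file names (not verified against the driver):

	```go
	package main

	import (
		"archive/tar"
		"bytes"
		"fmt"
	)

	// buildKeyTar packs a public key into a tar stream. The entry names below
	// are illustrative assumptions, not the driver's actual layout.
	func buildKeyTar(pubKey []byte) ([]byte, error) {
		var buf bytes.Buffer
		tw := tar.NewWriter(&buf)
		hdr := &tar.Header{
			Name: ".ssh/authorized_keys", // hypothetical path
			Mode: 0644,
			Size: int64(len(pubKey)),
		}
		if err := tw.WriteHeader(hdr); err != nil {
			return nil, err
		}
		if _, err := tw.Write(pubKey); err != nil {
			return nil, err
		}
		if err := tw.Close(); err != nil {
			return nil, err
		}
		return buf.Bytes(), nil
	}

	func main() {
		data, err := buildKeyTar([]byte("ssh-rsa AAAA... jenkins@minikube5\n"))
		if err != nil {
			panic(err)
		}
		fmt.Printf("tar stream is %d bytes\n", len(data))
	}
	```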
	I0513 23:56:59.469727    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-101100-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0513 23:57:02.711712    4024 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-101100-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0513 23:57:02.711712    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:57:02.711712    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-101100-m02 -DynamicMemoryEnabled $false
	I0513 23:57:04.758706    4024 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:57:04.758706    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:57:04.758796    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-101100-m02 -Count 2
	I0513 23:57:06.721119    4024 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:57:06.721301    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:57:06.721301    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-101100-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m02\boot2docker.iso'
	I0513 23:57:09.011412    4024 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:57:09.012242    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:57:09.012297    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-101100-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m02\disk.vhd'
	I0513 23:57:11.372214    4024 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:57:11.372214    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:57:11.372214    4024 main.go:141] libmachine: Starting VM...
	I0513 23:57:11.372290    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-101100-m02
	I0513 23:57:14.062225    4024 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:57:14.062225    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:57:14.062225    4024 main.go:141] libmachine: Waiting for host to start...
	I0513 23:57:14.062355    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0513 23:57:16.078389    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:57:16.078478    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:57:16.078654    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:57:18.294610    4024 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:57:18.294636    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:57:19.307040    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0513 23:57:21.308235    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:57:21.308454    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:57:21.308571    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:57:23.572550    4024 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:57:23.572947    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:57:24.589155    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0513 23:57:26.562284    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:57:26.562284    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:57:26.562749    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:57:28.853007    4024 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:57:28.853007    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:57:29.857255    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0513 23:57:31.877532    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:57:31.877532    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:57:31.877848    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:57:34.119542    4024 main.go:141] libmachine: [stdout =====>] : 
	I0513 23:57:34.119542    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:57:35.120896    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0513 23:57:37.077635    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:57:37.077685    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:57:37.077813    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:57:39.417199    4024 main.go:141] libmachine: [stdout =====>] : 172.23.109.58
	
	I0513 23:57:39.417199    4024 main.go:141] libmachine: [stderr =====>] : 
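	(Annotation) The "Waiting for host to start..." sequence above alternates between querying the VM state and the adapter's first IP address, sleeping roughly a second between attempts until PowerShell finally returns a non-empty address (172.23.109.58 here, after five empty polls). A generic sketch of that retry loop, assuming nothing beyond what the log shows:

	```go
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForIP polls getIP until it yields a non-empty address or the
	// attempts run out, mirroring the poll-and-sleep loop in the log.
	func waitForIP(getIP func() (string, error), attempts int, delay time.Duration) (string, error) {
		for i := 0; i < attempts; i++ {
			ip, err := getIP()
			if err == nil && ip != "" {
				return ip, nil
			}
			time.Sleep(delay)
		}
		return "", errors.New("timed out waiting for the VM to report an IP address")
	}

	func main() {
		calls := 0
		// Stand-in for the (Get-VM ...).networkadapters[0].ipaddresses[0] query,
		// which returns empty output until the guest has an address.
		fake := func() (string, error) {
			calls++
			if calls < 3 {
				return "", nil
			}
			return "172.23.109.58", nil
		}
		ip, err := waitForIP(fake, 10, time.Millisecond)
		if err != nil {
			panic(err)
		}
		fmt.Println("VM is up at", ip)
	}
	```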
	I0513 23:57:39.417961    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0513 23:57:41.346915    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:57:41.346915    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:57:41.346915    4024 machine.go:94] provisionDockerMachine start ...
	I0513 23:57:41.346915    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0513 23:57:43.237453    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:57:43.237453    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:57:43.237453    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:57:45.456099    4024 main.go:141] libmachine: [stdout =====>] : 172.23.109.58
	
	I0513 23:57:45.456505    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:57:45.460235    4024 main.go:141] libmachine: Using SSH client type: native
	I0513 23:57:45.460735    4024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.109.58 22 <nil> <nil>}
	I0513 23:57:45.460735    4024 main.go:141] libmachine: About to run SSH command:
	hostname
	I0513 23:57:45.589209    4024 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0513 23:57:45.589332    4024 buildroot.go:166] provisioning hostname "multinode-101100-m02"
	I0513 23:57:45.589439    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0513 23:57:47.451998    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:57:47.452824    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:57:47.452893    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:57:49.674913    4024 main.go:141] libmachine: [stdout =====>] : 172.23.109.58
	
	I0513 23:57:49.675811    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:57:49.679600    4024 main.go:141] libmachine: Using SSH client type: native
	I0513 23:57:49.679750    4024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.109.58 22 <nil> <nil>}
	I0513 23:57:49.679750    4024 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-101100-m02 && echo "multinode-101100-m02" | sudo tee /etc/hostname
	I0513 23:57:49.830823    4024 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-101100-m02
	
	I0513 23:57:49.830823    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0513 23:57:51.702946    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:57:51.702946    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:57:51.703746    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:57:53.939964    4024 main.go:141] libmachine: [stdout =====>] : 172.23.109.58
	
	I0513 23:57:53.940621    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:57:53.944351    4024 main.go:141] libmachine: Using SSH client type: native
	I0513 23:57:53.944431    4024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.109.58 22 <nil> <nil>}
	I0513 23:57:53.944431    4024 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-101100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-101100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-101100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0513 23:57:54.086693    4024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0513 23:57:54.086780    4024 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0513 23:57:54.086780    4024 buildroot.go:174] setting up certificates
	I0513 23:57:54.086780    4024 provision.go:84] configureAuth start
	I0513 23:57:54.086780    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0513 23:57:55.954826    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:57:55.955300    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:57:55.955300    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:57:58.154501    4024 main.go:141] libmachine: [stdout =====>] : 172.23.109.58
	
	I0513 23:57:58.154501    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:57:58.154501    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0513 23:58:00.017705    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:58:00.017705    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:00.018612    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:58:02.309081    4024 main.go:141] libmachine: [stdout =====>] : 172.23.109.58
	
	I0513 23:58:02.309160    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:02.309160    4024 provision.go:143] copyHostCerts
	I0513 23:58:02.309325    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0513 23:58:02.309537    4024 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0513 23:58:02.309594    4024 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0513 23:58:02.309926    4024 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0513 23:58:02.310787    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0513 23:58:02.310949    4024 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0513 23:58:02.311023    4024 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0513 23:58:02.311403    4024 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0513 23:58:02.311939    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0513 23:58:02.312474    4024 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0513 23:58:02.312474    4024 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0513 23:58:02.312778    4024 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0513 23:58:02.313529    4024 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-101100-m02 san=[127.0.0.1 172.23.109.58 localhost minikube multinode-101100-m02]
	I0513 23:58:02.490161    4024 provision.go:177] copyRemoteCerts
	I0513 23:58:02.498401    4024 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0513 23:58:02.498401    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0513 23:58:04.363877    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:58:04.363877    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:04.364644    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:58:06.603386    4024 main.go:141] libmachine: [stdout =====>] : 172.23.109.58
	
	I0513 23:58:06.603386    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:06.603862    4024 sshutil.go:53] new ssh client: &{IP:172.23.109.58 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m02\id_rsa Username:docker}
	I0513 23:58:06.706728    4024 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2080818s)
	I0513 23:58:06.706728    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0513 23:58:06.706728    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0513 23:58:06.747335    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0513 23:58:06.747335    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0513 23:58:06.788992    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0513 23:58:06.790001    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0513 23:58:06.834351    4024 provision.go:87] duration metric: took 12.7468281s to configureAuth
	I0513 23:58:06.834351    4024 buildroot.go:189] setting minikube options for container-runtime
	I0513 23:58:06.834881    4024 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:58:06.834923    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0513 23:58:08.691909    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:58:08.692200    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:08.692200    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:58:10.936309    4024 main.go:141] libmachine: [stdout =====>] : 172.23.109.58
	
	I0513 23:58:10.936309    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:10.940183    4024 main.go:141] libmachine: Using SSH client type: native
	I0513 23:58:10.940444    4024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.109.58 22 <nil> <nil>}
	I0513 23:58:10.940444    4024 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0513 23:58:11.072728    4024 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0513 23:58:11.072819    4024 buildroot.go:70] root file system type: tmpfs
	I0513 23:58:11.073010    4024 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0513 23:58:11.073164    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0513 23:58:12.982268    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:58:12.982338    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:12.982395    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:58:15.254893    4024 main.go:141] libmachine: [stdout =====>] : 172.23.109.58
	
	I0513 23:58:15.254893    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:15.258361    4024 main.go:141] libmachine: Using SSH client type: native
	I0513 23:58:15.258634    4024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.109.58 22 <nil> <nil>}
	I0513 23:58:15.258634    4024 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.23.106.39"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0513 23:58:15.412485    4024 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.23.106.39
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0513 23:58:15.412541    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0513 23:58:17.293067    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:58:17.293207    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:17.293281    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:58:19.550948    4024 main.go:141] libmachine: [stdout =====>] : 172.23.109.58
	
	I0513 23:58:19.550948    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:19.555555    4024 main.go:141] libmachine: Using SSH client type: native
	I0513 23:58:19.555914    4024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.109.58 22 <nil> <nil>}
	I0513 23:58:19.555991    4024 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0513 23:58:21.613312    4024 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0513 23:58:21.613381    4024 machine.go:97] duration metric: took 40.2641198s to provisionDockerMachine
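	(Annotation) The `diff -u ... || { mv ...; systemctl ... }` one-liner above installs docker.service.new only when it differs from the current unit, and the "can't stat" message shows the first-install case where no unit exists yet. The same replace-only-if-changed idiom, sketched in Go (not minikube's code):

	```go
	package main

	import (
		"bytes"
		"fmt"
		"os"
		"path/filepath"
	)

	// installIfChanged moves newPath over path unless the contents already
	// match. It reports whether a replacement happened, so callers know
	// whether to reload/restart the service.
	func installIfChanged(path, newPath string) (bool, error) {
		newData, err := os.ReadFile(newPath)
		if err != nil {
			return false, err
		}
		oldData, err := os.ReadFile(path)
		if err == nil && bytes.Equal(oldData, newData) {
			// Identical content: drop the staged copy, nothing to restart.
			return false, os.Remove(newPath)
		}
		// Differs, or the target does not exist yet (the "can't stat" case).
		return true, os.Rename(newPath, path)
	}

	func main() {
		dir, err := os.MkdirTemp("", "unitdemo")
		if err != nil {
			panic(err)
		}
		defer os.RemoveAll(dir)
		unit := filepath.Join(dir, "docker.service")
		staged := filepath.Join(dir, "docker.service.new")
		os.WriteFile(staged, []byte("[Unit]\nDescription=demo\n"), 0644)
		changed, err := installIfChanged(unit, staged)
		if err != nil {
			panic(err)
		}
		fmt.Println("replaced:", changed)
	}
	```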
	I0513 23:58:21.613381    4024 client.go:171] duration metric: took 1m43.5312031s to LocalClient.Create
	I0513 23:58:21.613450    4024 start.go:167] duration metric: took 1m43.5312715s to libmachine.API.Create "multinode-101100"
	I0513 23:58:21.613450    4024 start.go:293] postStartSetup for "multinode-101100-m02" (driver="hyperv")
	I0513 23:58:21.613450    4024 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0513 23:58:21.621883    4024 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0513 23:58:21.621883    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0513 23:58:23.493384    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:58:23.493782    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:23.493782    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:58:25.755893    4024 main.go:141] libmachine: [stdout =====>] : 172.23.109.58
	
	I0513 23:58:25.755893    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:25.756222    4024 sshutil.go:53] new ssh client: &{IP:172.23.109.58 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m02\id_rsa Username:docker}
	I0513 23:58:25.857573    4024 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2353895s)
	I0513 23:58:25.865351    4024 ssh_runner.go:195] Run: cat /etc/os-release
	I0513 23:58:25.872251    4024 command_runner.go:130] > NAME=Buildroot
	I0513 23:58:25.872251    4024 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0513 23:58:25.872251    4024 command_runner.go:130] > ID=buildroot
	I0513 23:58:25.872251    4024 command_runner.go:130] > VERSION_ID=2023.02.9
	I0513 23:58:25.872251    4024 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0513 23:58:25.872251    4024 info.go:137] Remote host: Buildroot 2023.02.9
	I0513 23:58:25.872251    4024 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0513 23:58:25.872251    4024 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0513 23:58:25.873679    4024 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> 59842.pem in /etc/ssl/certs
	I0513 23:58:25.873679    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /etc/ssl/certs/59842.pem
	I0513 23:58:25.883926    4024 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0513 23:58:25.903243    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /etc/ssl/certs/59842.pem (1708 bytes)
	I0513 23:58:25.947934    4024 start.go:296] duration metric: took 4.3342293s for postStartSetup
	I0513 23:58:25.949940    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0513 23:58:27.828565    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:58:27.828565    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:27.828637    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:58:30.075235    4024 main.go:141] libmachine: [stdout =====>] : 172.23.109.58
	
	I0513 23:58:30.075235    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:30.075589    4024 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\config.json ...
	I0513 23:58:30.077246    4024 start.go:128] duration metric: took 1m51.9978966s to createHost
	I0513 23:58:30.077320    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0513 23:58:31.962304    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:58:31.962377    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:31.962450    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:58:34.206215    4024 main.go:141] libmachine: [stdout =====>] : 172.23.109.58
	
	I0513 23:58:34.206215    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:34.210754    4024 main.go:141] libmachine: Using SSH client type: native
	I0513 23:58:34.211359    4024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.109.58 22 <nil> <nil>}
	I0513 23:58:34.211359    4024 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0513 23:58:34.345146    4024 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715644714.567022485
	
	I0513 23:58:34.345146    4024 fix.go:216] guest clock: 1715644714.567022485
	I0513 23:58:34.345266    4024 fix.go:229] Guest: 2024-05-13 23:58:34.567022485 +0000 UTC Remote: 2024-05-13 23:58:30.0773208 +0000 UTC m=+306.245885601 (delta=4.489701685s)
	I0513 23:58:34.345363    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0513 23:58:36.207712    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:58:36.207712    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:36.207790    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:58:38.429546    4024 main.go:141] libmachine: [stdout =====>] : 172.23.109.58
	
	I0513 23:58:38.429546    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:38.433294    4024 main.go:141] libmachine: Using SSH client type: native
	I0513 23:58:38.433712    4024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.109.58 22 <nil> <nil>}
	I0513 23:58:38.433712    4024 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715644714
	I0513 23:58:38.577813    4024 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon May 13 23:58:34 UTC 2024
	
	I0513 23:58:38.577813    4024 fix.go:236] clock set: Mon May 13 23:58:34 UTC 2024
	 (err=<nil>)
	I0513 23:58:38.577813    4024 start.go:83] releasing machines lock for "multinode-101100-m02", held for 2m0.4981089s
	I0513 23:58:38.578634    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0513 23:58:40.480772    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:58:40.480772    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:40.480870    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:58:42.762757    4024 main.go:141] libmachine: [stdout =====>] : 172.23.109.58
	
	I0513 23:58:42.763802    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:42.767982    4024 out.go:177] * Found network options:
	I0513 23:58:42.771218    4024 out.go:177]   - NO_PROXY=172.23.106.39
	W0513 23:58:42.773414    4024 proxy.go:119] fail to check proxy env: Error ip not in block
	I0513 23:58:42.775762    4024 out.go:177]   - NO_PROXY=172.23.106.39
	W0513 23:58:42.778065    4024 proxy.go:119] fail to check proxy env: Error ip not in block
	W0513 23:58:42.780965    4024 proxy.go:119] fail to check proxy env: Error ip not in block
	I0513 23:58:42.785025    4024 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0513 23:58:42.785025    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0513 23:58:42.795562    4024 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0513 23:58:42.795562    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0513 23:58:44.769172    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:58:44.769172    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:44.769172    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:58:44.771044    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:58:44.771255    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:44.771255    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0513 23:58:47.128963    4024 main.go:141] libmachine: [stdout =====>] : 172.23.109.58
	
	I0513 23:58:47.128963    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:47.129014    4024 sshutil.go:53] new ssh client: &{IP:172.23.109.58 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m02\id_rsa Username:docker}
	I0513 23:58:47.151825    4024 main.go:141] libmachine: [stdout =====>] : 172.23.109.58
	
	I0513 23:58:47.152224    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:47.152388    4024 sshutil.go:53] new ssh client: &{IP:172.23.109.58 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m02\id_rsa Username:docker}
	I0513 23:58:47.354347    4024 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0513 23:58:47.354512    4024 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5691493s)
	I0513 23:58:47.354512    4024 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0513 23:58:47.354642    4024 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5588111s)
	W0513 23:58:47.354642    4024 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0513 23:58:47.365871    4024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0513 23:58:47.392777    4024 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0513 23:58:47.392958    4024 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0513 23:58:47.392958    4024 start.go:494] detecting cgroup driver to use...
	I0513 23:58:47.393153    4024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 23:58:47.425279    4024 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0513 23:58:47.437179    4024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0513 23:58:47.464440    4024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0513 23:58:47.485898    4024 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0513 23:58:47.494498    4024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0513 23:58:47.519627    4024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 23:58:47.544141    4024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0513 23:58:47.569144    4024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 23:58:47.598671    4024 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0513 23:58:47.626124    4024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0513 23:58:47.650429    4024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0513 23:58:47.688329    4024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0513 23:58:47.718955    4024 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0513 23:58:47.736412    4024 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0513 23:58:47.744784    4024 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0513 23:58:47.778851    4024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:58:47.957896    4024 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0513 23:58:47.985730    4024 start.go:494] detecting cgroup driver to use...
	I0513 23:58:47.995163    4024 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0513 23:58:48.016170    4024 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0513 23:58:48.017551    4024 command_runner.go:130] > [Unit]
	I0513 23:58:48.017551    4024 command_runner.go:130] > Description=Docker Application Container Engine
	I0513 23:58:48.018096    4024 command_runner.go:130] > Documentation=https://docs.docker.com
	I0513 23:58:48.018096    4024 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0513 23:58:48.018096    4024 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0513 23:58:48.018225    4024 command_runner.go:130] > StartLimitBurst=3
	I0513 23:58:48.018225    4024 command_runner.go:130] > StartLimitIntervalSec=60
	I0513 23:58:48.018225    4024 command_runner.go:130] > [Service]
	I0513 23:58:48.018225    4024 command_runner.go:130] > Type=notify
	I0513 23:58:48.018225    4024 command_runner.go:130] > Restart=on-failure
	I0513 23:58:48.018308    4024 command_runner.go:130] > Environment=NO_PROXY=172.23.106.39
	I0513 23:58:48.018340    4024 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0513 23:58:48.018340    4024 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0513 23:58:48.018422    4024 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0513 23:58:48.018453    4024 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0513 23:58:48.018453    4024 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0513 23:58:48.018453    4024 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0513 23:58:48.018534    4024 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0513 23:58:48.018534    4024 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0513 23:58:48.018534    4024 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0513 23:58:48.018534    4024 command_runner.go:130] > ExecStart=
	I0513 23:58:48.018646    4024 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0513 23:58:48.018646    4024 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0513 23:58:48.018646    4024 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0513 23:58:48.018718    4024 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0513 23:58:48.018718    4024 command_runner.go:130] > LimitNOFILE=infinity
	I0513 23:58:48.018718    4024 command_runner.go:130] > LimitNPROC=infinity
	I0513 23:58:48.018718    4024 command_runner.go:130] > LimitCORE=infinity
	I0513 23:58:48.018810    4024 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0513 23:58:48.018810    4024 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0513 23:58:48.018961    4024 command_runner.go:130] > TasksMax=infinity
	I0513 23:58:48.018961    4024 command_runner.go:130] > TimeoutStartSec=0
	I0513 23:58:48.018961    4024 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0513 23:58:48.019057    4024 command_runner.go:130] > Delegate=yes
	I0513 23:58:48.019057    4024 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0513 23:58:48.019057    4024 command_runner.go:130] > KillMode=process
	I0513 23:58:48.019057    4024 command_runner.go:130] > [Install]
	I0513 23:58:48.019057    4024 command_runner.go:130] > WantedBy=multi-user.target
	I0513 23:58:48.027000    4024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 23:58:48.056751    4024 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0513 23:58:48.094798    4024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 23:58:48.126902    4024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 23:58:48.159658    4024 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0513 23:58:48.221208    4024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 23:58:48.242714    4024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 23:58:48.271547    4024 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0513 23:58:48.278820    4024 ssh_runner.go:195] Run: which cri-dockerd
	I0513 23:58:48.284870    4024 command_runner.go:130] > /usr/bin/cri-dockerd
	I0513 23:58:48.296419    4024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0513 23:58:48.312164    4024 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0513 23:58:48.346240    4024 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0513 23:58:48.517709    4024 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0513 23:58:48.675647    4024 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0513 23:58:48.675812    4024 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0513 23:58:48.712662    4024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:58:48.875052    4024 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0513 23:58:51.361623    4024 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4864241s)
	I0513 23:58:51.373226    4024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0513 23:58:51.404177    4024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 23:58:51.435430    4024 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0513 23:58:51.605427    4024 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0513 23:58:51.778785    4024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:58:51.953173    4024 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0513 23:58:51.992285    4024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 23:58:52.020973    4024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:58:52.192737    4024 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0513 23:58:52.289887    4024 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0513 23:58:52.303108    4024 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0513 23:58:52.311102    4024 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0513 23:58:52.311102    4024 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0513 23:58:52.311178    4024 command_runner.go:130] > Device: 0,22	Inode: 888         Links: 1
	I0513 23:58:52.311178    4024 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0513 23:58:52.311178    4024 command_runner.go:130] > Access: 2024-05-13 23:58:52.440151920 +0000
	I0513 23:58:52.311178    4024 command_runner.go:130] > Modify: 2024-05-13 23:58:52.440151920 +0000
	I0513 23:58:52.311178    4024 command_runner.go:130] > Change: 2024-05-13 23:58:52.443152099 +0000
	I0513 23:58:52.311178    4024 command_runner.go:130] >  Birth: -
	I0513 23:58:52.311252    4024 start.go:562] Will wait 60s for crictl version
	I0513 23:58:52.322026    4024 ssh_runner.go:195] Run: which crictl
	I0513 23:58:52.327046    4024 command_runner.go:130] > /usr/bin/crictl
	I0513 23:58:52.335038    4024 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0513 23:58:52.386392    4024 command_runner.go:130] > Version:  0.1.0
	I0513 23:58:52.386392    4024 command_runner.go:130] > RuntimeName:  docker
	I0513 23:58:52.386392    4024 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0513 23:58:52.386392    4024 command_runner.go:130] > RuntimeApiVersion:  v1
	I0513 23:58:52.388778    4024 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0513 23:58:52.395475    4024 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 23:58:52.422087    4024 command_runner.go:130] > 26.0.2
	I0513 23:58:52.430074    4024 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 23:58:52.458077    4024 command_runner.go:130] > 26.0.2
	I0513 23:58:52.460076    4024 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0513 23:58:52.464074    4024 out.go:177]   - env NO_PROXY=172.23.106.39
	I0513 23:58:52.467112    4024 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0513 23:58:52.470077    4024 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0513 23:58:52.470077    4024 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0513 23:58:52.470077    4024 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0513 23:58:52.470077    4024 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:95:ed Flags:up|broadcast|multicast|running}
	I0513 23:58:52.472075    4024 ip.go:210] interface addr: fe80::3ceb:68d:afab:af25/64
	I0513 23:58:52.472075    4024 ip.go:210] interface addr: 172.23.96.1/20
	I0513 23:58:52.480086    4024 ssh_runner.go:195] Run: grep 172.23.96.1	host.minikube.internal$ /etc/hosts
	I0513 23:58:52.486508    4024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0513 23:58:52.506732    4024 mustload.go:65] Loading cluster: multinode-101100
	I0513 23:58:52.507426    4024 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:58:52.508175    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:58:54.364986    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:58:54.364986    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:54.365058    4024 host.go:66] Checking if "multinode-101100" exists ...
	I0513 23:58:54.365602    4024 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100 for IP: 172.23.109.58
	I0513 23:58:54.365671    4024 certs.go:194] generating shared ca certs ...
	I0513 23:58:54.365671    4024 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 23:58:54.366165    4024 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0513 23:58:54.366400    4024 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0513 23:58:54.366590    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0513 23:58:54.366710    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0513 23:58:54.366860    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0513 23:58:54.366984    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0513 23:58:54.367337    4024 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem (1338 bytes)
	W0513 23:58:54.367581    4024 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984_empty.pem, impossibly tiny 0 bytes
	I0513 23:58:54.367662    4024 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0513 23:58:54.367847    4024 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0513 23:58:54.368002    4024 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0513 23:58:54.368126    4024 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0513 23:58:54.368198    4024 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem (1708 bytes)
	I0513 23:58:54.368198    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:58:54.368198    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem -> /usr/share/ca-certificates/5984.pem
	I0513 23:58:54.368808    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /usr/share/ca-certificates/59842.pem
	I0513 23:58:54.368958    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0513 23:58:54.413130    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0513 23:58:54.457921    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0513 23:58:54.505367    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0513 23:58:54.548015    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0513 23:58:54.587555    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem --> /usr/share/ca-certificates/5984.pem (1338 bytes)
	I0513 23:58:54.628747    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /usr/share/ca-certificates/59842.pem (1708 bytes)
	I0513 23:58:54.677244    4024 ssh_runner.go:195] Run: openssl version
	I0513 23:58:54.685516    4024 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0513 23:58:54.693868    4024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5984.pem && ln -fs /usr/share/ca-certificates/5984.pem /etc/ssl/certs/5984.pem"
	I0513 23:58:54.724902    4024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5984.pem
	I0513 23:58:54.731100    4024 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0513 23:58:54.731923    4024 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0513 23:58:54.741751    4024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5984.pem
	I0513 23:58:54.751085    4024 command_runner.go:130] > 51391683
	I0513 23:58:54.762875    4024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5984.pem /etc/ssl/certs/51391683.0"
	I0513 23:58:54.791632    4024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59842.pem && ln -fs /usr/share/ca-certificates/59842.pem /etc/ssl/certs/59842.pem"
	I0513 23:58:54.818632    4024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59842.pem
	I0513 23:58:54.825268    4024 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0513 23:58:54.825268    4024 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0513 23:58:54.834016    4024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59842.pem
	I0513 23:58:54.842309    4024 command_runner.go:130] > 3ec20f2e
	I0513 23:58:54.849292    4024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59842.pem /etc/ssl/certs/3ec20f2e.0"
	I0513 23:58:54.879296    4024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0513 23:58:54.906273    4024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:58:54.913083    4024 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:58:54.913083    4024 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:58:54.921161    4024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0513 23:58:54.927882    4024 command_runner.go:130] > b5213941
	I0513 23:58:54.937210    4024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0513 23:58:54.965278    4024 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0513 23:58:54.970845    4024 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0513 23:58:54.971467    4024 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0513 23:58:54.971722    4024 kubeadm.go:928] updating node {m02 172.23.109.58 8443 v1.30.0 docker false true} ...
	I0513 23:58:54.971944    4024 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-101100-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.109.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-101100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0513 23:58:54.984560    4024 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0513 23:58:55.000305    4024 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	I0513 23:58:55.000680    4024 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0513 23:58:55.009442    4024 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0513 23:58:55.027467    4024 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0513 23:58:55.027592    4024 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0513 23:58:55.027592    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0513 23:58:55.027592    4024 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0513 23:58:55.027794    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0513 23:58:55.043938    4024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 23:58:55.045003    4024 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0513 23:58:55.045583    4024 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0513 23:58:55.071325    4024 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0513 23:58:55.071405    4024 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0513 23:58:55.071405    4024 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0513 23:58:55.071405    4024 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0513 23:58:55.071405    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0513 23:58:55.071405    4024 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0513 23:58:55.071405    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0513 23:58:55.082101    4024 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0513 23:58:55.112170    4024 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0513 23:58:55.121412    4024 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0513 23:58:55.121533    4024 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0513 23:58:56.111863    4024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0513 23:58:56.128421    4024 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0513 23:58:56.161229    4024 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0513 23:58:56.199215    4024 ssh_runner.go:195] Run: grep 172.23.106.39	control-plane.minikube.internal$ /etc/hosts
	I0513 23:58:56.205505    4024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.106.39	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
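The `/etc/hosts` one-liner above uses a filter-then-append pattern: drop any existing line for the name, append the fresh entry, and copy the temp file back over the original. A sketch against a scratch file rather than the real `/etc/hosts` (assumes bash for the `$'\t'` quoting, as in the logged command):

```shell
# Filter-then-append rewrite of a hosts-style file, mirroring the logged
# one-liner. Operates on a scratch file, not the real /etc/hosts.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.1\tcontrol-plane.minikube.internal\n' > "$hosts"
# Remove any stale entry for the name, then append the current IP.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"
  printf '172.23.106.39\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
```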
	I0513 23:58:56.235482    4024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:58:56.405951    4024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 23:58:56.432587    4024 host.go:66] Checking if "multinode-101100" exists ...
	I0513 23:58:56.433579    4024 start.go:316] joinCluster: &{Name:multinode-101100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-101100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.106.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.109.58 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 23:58:56.433579    4024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0513 23:58:56.433579    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0513 23:58:58.343792    4024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:58:58.343792    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:58:58.343966    4024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0513 23:59:00.595223    4024 main.go:141] libmachine: [stdout =====>] : 172.23.106.39
	
	I0513 23:59:00.595223    4024 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:59:00.595981    4024 sshutil.go:53] new ssh client: &{IP:172.23.106.39 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\id_rsa Username:docker}
	I0513 23:59:00.772194    4024 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 02cz9q.rotavztz6d240xog --discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 
	I0513 23:59:00.773145    4024 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.3392544s)
	I0513 23:59:00.773223    4024 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.23.109.58 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0513 23:59:00.773223    4024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 02cz9q.rotavztz6d240xog --discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-101100-m02"
	I0513 23:59:00.961219    4024 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0513 23:59:02.254022    4024 command_runner.go:130] > [preflight] Running pre-flight checks
	I0513 23:59:02.254022    4024 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0513 23:59:02.254213    4024 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0513 23:59:02.254213    4024 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0513 23:59:02.254261    4024 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0513 23:59:02.254261    4024 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0513 23:59:02.254261    4024 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0513 23:59:02.254261    4024 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001690428s
	I0513 23:59:02.254379    4024 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0513 23:59:02.254427    4024 command_runner.go:130] > This node has joined the cluster:
	I0513 23:59:02.254464    4024 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0513 23:59:02.254530    4024 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0513 23:59:02.254530    4024 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0513 23:59:02.254602    4024 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 02cz9q.rotavztz6d240xog --discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-101100-m02": (1.4812192s)
	I0513 23:59:02.254653    4024 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0513 23:59:02.458330    4024 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0513 23:59:02.639965    4024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-101100-m02 minikube.k8s.io/updated_at=2024_05_13T23_59_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761 minikube.k8s.io/name=multinode-101100 minikube.k8s.io/primary=false
	I0513 23:59:02.758977    4024 command_runner.go:130] > node/multinode-101100-m02 labeled
	I0513 23:59:02.761209    4024 start.go:318] duration metric: took 6.3272563s to joinCluster
	I0513 23:59:02.761209    4024 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.23.109.58 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0513 23:59:02.765177    4024 out.go:177] * Verifying Kubernetes components...
	I0513 23:59:02.761867    4024 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:59:02.776996    4024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 23:59:02.963384    4024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 23:59:02.987788    4024 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 23:59:02.988510    4024 kapi.go:59] client config for multinode-101100: &rest.Config{Host:"https://172.23.106.39:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0513 23:59:02.989071    4024 node_ready.go:35] waiting up to 6m0s for node "multinode-101100-m02" to be "Ready" ...
	I0513 23:59:02.989071    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:02.989071    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:02.989071    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:02.989071    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:02.999929    4024 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0513 23:59:03.000501    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:03.000501    4024 round_trippers.go:580]     Audit-Id: e0f51870-4b22-4f2a-98dd-9a4ef8661bfc
	I0513 23:59:03.000501    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:03.000501    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:03.000501    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:03.000501    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:03.000501    4024 round_trippers.go:580]     Content-Length: 4029
	I0513 23:59:03.000501    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:03 GMT
	I0513 23:59:03.000615    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"595","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0513 23:59:03.491237    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:03.491311    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:03.491311    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:03.491311    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:03.498431    4024 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:59:03.498431    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:03.498431    4024 round_trippers.go:580]     Audit-Id: c9372fed-d71e-4251-afc4-7a545e961daf
	I0513 23:59:03.498586    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:03.498586    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:03.498586    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:03.498586    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:03.498586    4024 round_trippers.go:580]     Content-Length: 4029
	I0513 23:59:03.498586    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:03 GMT
	I0513 23:59:03.498800    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"595","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0513 23:59:03.989452    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:03.989658    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:03.989658    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:03.989658    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:03.996911    4024 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:59:03.996911    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:03.996911    4024 round_trippers.go:580]     Audit-Id: a991c9e3-dde1-4fac-8bfb-93f0b6e3cee7
	I0513 23:59:03.996911    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:03.996911    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:03.996911    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:03.996911    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:03.997393    4024 round_trippers.go:580]     Content-Length: 4029
	I0513 23:59:03.997393    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:04 GMT
	I0513 23:59:03.997548    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"595","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0513 23:59:04.503672    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:04.503672    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:04.503672    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:04.503672    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:04.506882    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:59:04.506882    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:04.506882    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:04.507498    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:04.507498    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:04.507498    4024 round_trippers.go:580]     Content-Length: 4029
	I0513 23:59:04.507498    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:04 GMT
	I0513 23:59:04.507498    4024 round_trippers.go:580]     Audit-Id: 64ad7c36-5af2-476d-b693-64f29d4184d6
	I0513 23:59:04.507498    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:04.507560    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"595","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0513 23:59:05.000832    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:05.000902    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:05.000902    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:05.000997    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:05.007027    4024 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:59:05.007566    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:05.007566    4024 round_trippers.go:580]     Audit-Id: ead25008-cf2f-49bf-8464-4332c71d4d3a
	I0513 23:59:05.007566    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:05.007566    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:05.007566    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:05.007566    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:05.007566    4024 round_trippers.go:580]     Content-Length: 4029
	I0513 23:59:05.007566    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:05 GMT
	I0513 23:59:05.007810    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"595","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0513 23:59:05.007851    4024 node_ready.go:53] node "multinode-101100-m02" has status "Ready":"False"
	I0513 23:59:05.498354    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:05.498354    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:05.498354    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:05.498354    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:05.502387    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:59:05.502387    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:05.502479    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:05.502479    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:05.502479    4024 round_trippers.go:580]     Content-Length: 4029
	I0513 23:59:05.502479    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:05 GMT
	I0513 23:59:05.502479    4024 round_trippers.go:580]     Audit-Id: 997deb67-e495-4748-bd34-acd0bcb4a044
	I0513 23:59:05.502479    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:05.502479    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:05.502772    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"595","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0513 23:59:05.996327    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:05.996327    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:05.996327    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:05.996327    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:05.999600    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:59:05.999600    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:05.999600    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:05.999600    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:05.999600    4024 round_trippers.go:580]     Content-Length: 4029
	I0513 23:59:05.999600    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:06 GMT
	I0513 23:59:05.999600    4024 round_trippers.go:580]     Audit-Id: 0e5261f1-c5bc-4759-ae35-dcaae1419869
	I0513 23:59:05.999600    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:05.999600    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:05.999600    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"595","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0513 23:59:06.498790    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:06.498790    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:06.498790    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:06.498790    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:06.600354    4024 round_trippers.go:574] Response Status: 200 OK in 101 milliseconds
	I0513 23:59:06.601257    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:06.601257    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:06.601257    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:06.601257    4024 round_trippers.go:580]     Content-Length: 4029
	I0513 23:59:06.601373    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:06 GMT
	I0513 23:59:06.601373    4024 round_trippers.go:580]     Audit-Id: 2f17c8ff-e37b-46c3-8ca5-ce8cf112770b
	I0513 23:59:06.601373    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:06.601373    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:06.601499    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"595","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0513 23:59:06.997321    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:06.997321    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:06.997321    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:06.997321    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:07.000737    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:59:07.000737    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:07.000737    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:07.000737    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:07.000737    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:07.000737    4024 round_trippers.go:580]     Content-Length: 4029
	I0513 23:59:07.000737    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:07 GMT
	I0513 23:59:07.000737    4024 round_trippers.go:580]     Audit-Id: 2482198a-1815-4206-8832-2498f6ec1333
	I0513 23:59:07.000737    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:07.001532    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"595","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0513 23:59:07.498171    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:07.498296    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:07.498296    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:07.498296    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:07.501869    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:59:07.501869    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:07.501869    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:07.501869    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:07.501869    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:07.501869    4024 round_trippers.go:580]     Content-Length: 4029
	I0513 23:59:07.502004    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:07 GMT
	I0513 23:59:07.502082    4024 round_trippers.go:580]     Audit-Id: f8ea65c8-f6ff-4863-b32d-d283a2edb261
	I0513 23:59:07.502082    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:07.502336    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"595","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0513 23:59:07.502999    4024 node_ready.go:53] node "multinode-101100-m02" has status "Ready":"False"
	I0513 23:59:07.999511    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:07.999579    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:07.999649    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:07.999649    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:08.003154    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:59:08.003447    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:08.003447    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:08.003447    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:08.003447    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:08.003447    4024 round_trippers.go:580]     Content-Length: 4029
	I0513 23:59:08.003447    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:08 GMT
	I0513 23:59:08.003447    4024 round_trippers.go:580]     Audit-Id: 08e5e973-7b4c-4042-8b16-044b2f58f20e
	I0513 23:59:08.003557    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:08.003557    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"595","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0513 23:59:08.500821    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:08.500821    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:08.500821    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:08.500821    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:08.504479    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:59:08.504734    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:08.504734    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:08.504734    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:08.504734    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:08.504734    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:08.504734    4024 round_trippers.go:580]     Content-Length: 4029
	I0513 23:59:08.504734    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:08 GMT
	I0513 23:59:08.504734    4024 round_trippers.go:580]     Audit-Id: a43ebb07-30b4-427b-8da6-b56edae8f944
	I0513 23:59:08.504959    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"595","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0513 23:59:08.992132    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:08.992188    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:08.992255    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:08.992255    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:08.995546    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:59:08.995546    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:08.995546    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:08.995546    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:08.995546    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:08.995546    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:08.995546    4024 round_trippers.go:580]     Content-Length: 4029
	I0513 23:59:08.995546    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:09 GMT
	I0513 23:59:08.995546    4024 round_trippers.go:580]     Audit-Id: 41e31a62-4569-4c5d-af0c-8c25aecca648
	I0513 23:59:08.995546    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"595","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0513 23:59:09.498623    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:09.498623    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:09.498623    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:09.498623    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:09.506219    4024 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:59:09.507060    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:09.507060    4024 round_trippers.go:580]     Audit-Id: ad85dbac-832f-47fc-aa1a-0597f0eb74eb
	I0513 23:59:09.507060    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:09.507060    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:09.507060    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:09.507060    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:09.507060    4024 round_trippers.go:580]     Content-Length: 4029
	I0513 23:59:09.507060    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:09 GMT
	I0513 23:59:09.507222    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"595","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0513 23:59:09.507622    4024 node_ready.go:53] node "multinode-101100-m02" has status "Ready":"False"
	I0513 23:59:10.005033    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:10.005033    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:10.005033    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:10.005033    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:10.010162    4024 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:59:10.010162    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:10.010162    4024 round_trippers.go:580]     Audit-Id: cd57a876-5183-438a-ad90-aa9bf811de1a
	I0513 23:59:10.010162    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:10.010162    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:10.010162    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:10.010162    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:10.010162    4024 round_trippers.go:580]     Content-Length: 4029
	I0513 23:59:10.010162    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:10 GMT
	I0513 23:59:10.010438    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"595","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0513 23:59:10.498625    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:10.498680    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:10.498680    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:10.498680    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:10.502774    4024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:59:10.502844    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:10.502844    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:10.502844    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:10.502844    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:10.502844    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:10.502844    4024 round_trippers.go:580]     Content-Length: 4029
	I0513 23:59:10.502844    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:10 GMT
	I0513 23:59:10.502844    4024 round_trippers.go:580]     Audit-Id: 7807c0d8-ac74-4526-b022-2d1a14dc9393
	I0513 23:59:10.502905    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"595","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0513 23:59:11.004283    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:11.004283    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:11.004283    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:11.004283    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:11.009269    4024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:59:11.009269    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:11.009269    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:11.009269    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:11.009269    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:11.009269    4024 round_trippers.go:580]     Content-Length: 4029
	I0513 23:59:11.009269    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:11 GMT
	I0513 23:59:11.009269    4024 round_trippers.go:580]     Audit-Id: cc62a1be-7238-4ce8-a93d-39b9898cbe61
	I0513 23:59:11.009269    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:11.009269    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"595","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0513 23:59:11.496305    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:11.496305    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:11.496305    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:11.496305    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:11.499830    4024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:59:11.499872    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:11.499872    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:11.499872    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:11.499872    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:11.499872    4024 round_trippers.go:580]     Content-Length: 4029
	I0513 23:59:11.499872    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:11 GMT
	I0513 23:59:11.499872    4024 round_trippers.go:580]     Audit-Id: bb91eb1d-3182-4f6a-91e5-8e17b48ad159
	I0513 23:59:11.499872    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:11.500017    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"595","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0513 23:59:11.998654    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:11.998908    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:11.998908    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:11.998908    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:12.002251    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:59:12.002668    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:12.002668    4024 round_trippers.go:580]     Audit-Id: 0245f730-cf0c-4546-97d9-b150be8fe025
	I0513 23:59:12.002668    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:12.002668    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:12.002668    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:12.002668    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:12.002747    4024 round_trippers.go:580]     Content-Length: 4029
	I0513 23:59:12.002747    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:12 GMT
	I0513 23:59:12.002990    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"595","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3005 chars]
	I0513 23:59:12.003188    4024 node_ready.go:53] node "multinode-101100-m02" has status "Ready":"False"
	I0513 23:59:12.496471    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:12.496471    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:12.496471    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:12.496471    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:12.658521    4024 round_trippers.go:574] Response Status: 200 OK in 162 milliseconds
	I0513 23:59:12.658521    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:12.658521    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:12.658521    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:12.658521    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:12.658521    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:12 GMT
	I0513 23:59:12.658521    4024 round_trippers.go:580]     Audit-Id: 9531a8dc-22b2-439b-bfb0-d8a4f4f40e03
	I0513 23:59:12.658521    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:12.659181    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"609","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0513 23:59:12.998928    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:12.999119    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:12.999119    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:12.999119    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:13.003667    4024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:59:13.003667    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:13.003667    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:13.003667    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:13.003667    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:13 GMT
	I0513 23:59:13.003667    4024 round_trippers.go:580]     Audit-Id: 0a65feb5-a4e9-4e8e-b9e0-7e0ef7a8d914
	I0513 23:59:13.003667    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:13.003667    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:13.003667    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"609","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0513 23:59:13.505472    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:13.505472    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:13.505472    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:13.505472    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:13.508514    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:59:13.508514    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:13.508514    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:13.508514    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:13.508514    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:13 GMT
	I0513 23:59:13.508514    4024 round_trippers.go:580]     Audit-Id: 35fd9767-cfb2-4276-8bf3-85af070abce4
	I0513 23:59:13.508514    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:13.508514    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:13.509160    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"609","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0513 23:59:13.995471    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:13.995471    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:13.995471    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:13.995715    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:13.999207    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:59:13.999207    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:13.999207    4024 round_trippers.go:580]     Audit-Id: b1328306-d2fe-4589-9dc3-2e1f607b706b
	I0513 23:59:13.999207    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:13.999207    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:13.999207    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:13.999207    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:13.999207    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:14 GMT
	I0513 23:59:13.999535    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"609","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0513 23:59:14.503634    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:14.503691    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:14.503691    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:14.503691    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:14.507083    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:59:14.507083    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:14.507157    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:14 GMT
	I0513 23:59:14.507157    4024 round_trippers.go:580]     Audit-Id: 624ab340-1b54-4f03-ba10-75be6e4a655d
	I0513 23:59:14.507157    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:14.507157    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:14.507157    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:14.507157    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:14.508253    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"609","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0513 23:59:14.508733    4024 node_ready.go:53] node "multinode-101100-m02" has status "Ready":"False"
	I0513 23:59:14.997471    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:14.997471    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:14.997471    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:14.997471    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:15.001758    4024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:59:15.001818    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:15.001818    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:15.001818    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:15 GMT
	I0513 23:59:15.001818    4024 round_trippers.go:580]     Audit-Id: 6ced8fbe-477e-44f7-bf4d-84207d3b6da0
	I0513 23:59:15.001818    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:15.001818    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:15.001818    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:15.001818    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"609","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0513 23:59:15.504474    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:15.504474    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:15.504474    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:15.504474    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:15.509018    4024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:59:15.509092    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:15.509092    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:15.509092    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:15.509092    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:15.509092    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:15.509092    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:15 GMT
	I0513 23:59:15.509092    4024 round_trippers.go:580]     Audit-Id: d2004846-2dfe-43ed-8345-d4f2f07de2f0
	I0513 23:59:15.509785    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"609","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0513 23:59:15.997140    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:15.997140    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:15.997140    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:15.997140    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:16.002285    4024 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:59:16.002285    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:16.002285    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:16 GMT
	I0513 23:59:16.002285    4024 round_trippers.go:580]     Audit-Id: d171f2c6-fa26-4d9c-a20d-e452afc67e71
	I0513 23:59:16.002285    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:16.002285    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:16.002285    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:16.002285    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:16.002285    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"609","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0513 23:59:16.503995    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:16.503995    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:16.504051    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:16.504051    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:16.509680    4024 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0513 23:59:16.509680    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:16.509680    4024 round_trippers.go:580]     Audit-Id: 43d48079-d593-4596-9702-a4d0fde67eb4
	I0513 23:59:16.509680    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:16.509680    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:16.509680    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:16.509680    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:16.509680    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:16 GMT
	I0513 23:59:16.510621    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"609","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0513 23:59:16.511021    4024 node_ready.go:53] node "multinode-101100-m02" has status "Ready":"False"
	I0513 23:59:16.990138    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:16.990138    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:16.990138    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:16.990138    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:17.098966    4024 round_trippers.go:574] Response Status: 200 OK in 108 milliseconds
	I0513 23:59:17.098966    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:17.098966    4024 round_trippers.go:580]     Audit-Id: b6ddba3d-f6d7-4941-a816-fa767728f920
	I0513 23:59:17.099112    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:17.099112    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:17.099112    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:17.099112    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:17.099112    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:17 GMT
	I0513 23:59:17.100764    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"609","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0513 23:59:17.493363    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:17.493472    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:17.493472    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:17.493472    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:17.497302    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:59:17.497302    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:17.497302    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:17.497302    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:17.497302    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:17.497302    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:17.497302    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:17 GMT
	I0513 23:59:17.497302    4024 round_trippers.go:580]     Audit-Id: 964941b6-81ca-4269-95d6-283983a7a841
	I0513 23:59:17.497302    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"609","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0513 23:59:17.993507    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:17.993644    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:17.993644    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:17.993725    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:17.997040    4024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:59:17.997040    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:17.997040    4024 round_trippers.go:580]     Audit-Id: a7b37ba1-5717-4e36-a11e-09b5abf9f878
	I0513 23:59:17.997040    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:17.997040    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:17.997040    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:17.997040    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:17.997040    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:18 GMT
	I0513 23:59:17.997258    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"609","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0513 23:59:18.496583    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:18.496678    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:18.496678    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:18.496678    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:18.502856    4024 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0513 23:59:18.502856    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:18.502856    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:18 GMT
	I0513 23:59:18.502856    4024 round_trippers.go:580]     Audit-Id: f5f76703-f017-43d7-a38e-3ab66d49c109
	I0513 23:59:18.502856    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:18.502856    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:18.502856    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:18.502856    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:18.503520    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"609","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0513 23:59:18.994917    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:18.994917    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:18.995017    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:18.995017    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:18.997985    4024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:59:18.998258    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:18.998258    4024 round_trippers.go:580]     Audit-Id: 9bc193bb-af2d-48c2-9de2-50ccd2fa19c8
	I0513 23:59:18.998258    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:18.998336    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:18.998336    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:18.998336    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:18.998336    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:19 GMT
	I0513 23:59:18.998614    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"609","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0513 23:59:18.999237    4024 node_ready.go:53] node "multinode-101100-m02" has status "Ready":"False"
	I0513 23:59:19.495518    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:19.495518    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:19.495518    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:19.495518    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:19.499081    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:59:19.499081    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:19.499311    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:19.499311    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:19.499311    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:19.499311    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:19 GMT
	I0513 23:59:19.499311    4024 round_trippers.go:580]     Audit-Id: 5a6c8370-b424-403a-9b04-22ed93e68883
	I0513 23:59:19.499311    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:19.499515    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"609","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0513 23:59:19.994130    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:19.994130    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:19.994130    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:19.994130    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:19.997690    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:59:19.997690    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:19.998397    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:19.998397    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:19.998397    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:20 GMT
	I0513 23:59:19.998397    4024 round_trippers.go:580]     Audit-Id: e273757b-cb43-4c65-ade8-29c234b1a136
	I0513 23:59:19.998397    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:19.998397    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:19.998582    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"609","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0513 23:59:20.492926    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:20.492926    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:20.492926    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:20.493014    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:20.496878    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:59:20.496878    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:20.496878    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:20.496878    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:20 GMT
	I0513 23:59:20.496878    4024 round_trippers.go:580]     Audit-Id: 526d4dc2-1dac-4979-93e8-2b243b333890
	I0513 23:59:20.496878    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:20.496878    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:20.496878    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:20.496878    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"609","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0513 23:59:21.005146    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:21.005182    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:21.005182    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:21.005182    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:21.008475    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:59:21.008475    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:21.009228    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:21.009228    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:21.009228    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:21.009228    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:21 GMT
	I0513 23:59:21.009228    4024 round_trippers.go:580]     Audit-Id: 8c7bd4e7-1b9f-4e36-90a6-61694cb6d02c
	I0513 23:59:21.009228    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:21.009591    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"609","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0513 23:59:21.010100    4024 node_ready.go:53] node "multinode-101100-m02" has status "Ready":"False"
	I0513 23:59:21.503200    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:21.503200    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:21.503200    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:21.503200    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:21.507479    4024 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0513 23:59:21.507479    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:21.507479    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:21.507479    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:21.507479    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:21 GMT
	I0513 23:59:21.507479    4024 round_trippers.go:580]     Audit-Id: c57d3992-3e1e-47b0-9b8c-4e4ac5d2492b
	I0513 23:59:21.507479    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:21.507901    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:21.508131    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"609","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0513 23:59:22.001031    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:22.001031    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:22.001031    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:22.001031    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:22.004203    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:59:22.005193    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:22.005193    4024 round_trippers.go:580]     Audit-Id: a89212e7-e8a7-43e5-9d92-5a0e7e020e04
	I0513 23:59:22.005193    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:22.005193    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:22.005193    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:22.005193    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:22.005193    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:22 GMT
	I0513 23:59:22.005484    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"609","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3397 chars]
	I0513 23:59:22.504363    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:22.504468    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:22.504468    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:22.504468    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:22.507897    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:59:22.507897    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:22.507897    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:22.507897    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:22 GMT
	I0513 23:59:22.507897    4024 round_trippers.go:580]     Audit-Id: 04c5e2c0-b681-4804-888a-4f985b4758e9
	I0513 23:59:22.507897    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:22.507897    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:22.507897    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:22.508760    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"632","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3143 chars]
	I0513 23:59:22.509109    4024 node_ready.go:49] node "multinode-101100-m02" has status "Ready":"True"
	I0513 23:59:22.509109    4024 node_ready.go:38] duration metric: took 19.5188818s for node "multinode-101100-m02" to be "Ready" ...
	I0513 23:59:22.509224    4024 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0513 23:59:22.509329    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods
	I0513 23:59:22.509329    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:22.509329    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:22.509329    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:22.517381    4024 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0513 23:59:22.517381    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:22.517381    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:22.517381    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:22.517381    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:22.517381    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:22 GMT
	I0513 23:59:22.517381    4024 round_trippers.go:580]     Audit-Id: 3f063e56-6d2e-4540-9b2b-7e73442cea96
	I0513 23:59:22.517381    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:22.518288    4024 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"632"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"442","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70428 chars]
	I0513 23:59:22.520997    4024 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace to be "Ready" ...
	I0513 23:59:22.520997    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0513 23:59:22.520997    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:22.520997    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:22.521534    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:22.523782    4024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:59:22.523782    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:22.523782    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:22.523782    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:22 GMT
	I0513 23:59:22.523782    4024 round_trippers.go:580]     Audit-Id: 8d0c755e-a427-4e9a-ac30-42ebac757a7a
	I0513 23:59:22.523782    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:22.523782    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:22.523782    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:22.524422    4024 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"442","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0513 23:59:22.524997    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:59:22.524997    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:22.525057    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:22.525057    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:22.526232    4024 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0513 23:59:22.527252    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:22.527252    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:22.527252    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:22.527252    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:22 GMT
	I0513 23:59:22.527252    4024 round_trippers.go:580]     Audit-Id: 16fbb71a-ba69-4fff-8918-54b56c469a04
	I0513 23:59:22.527252    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:22.527252    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:22.527535    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"452","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0513 23:59:22.527535    4024 pod_ready.go:92] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"True"
	I0513 23:59:22.527535    4024 pod_ready.go:81] duration metric: took 6.5375ms for pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace to be "Ready" ...
	I0513 23:59:22.527535    4024 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0513 23:59:22.527535    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-101100
	I0513 23:59:22.527535    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:22.527535    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:22.527535    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:22.530121    4024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:59:22.531123    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:22.531123    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:22.531123    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:22.531123    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:22 GMT
	I0513 23:59:22.531123    4024 round_trippers.go:580]     Audit-Id: ec110b22-7126-4508-9e84-5a1fca5b9922
	I0513 23:59:22.531123    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:22.531123    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:22.531396    4024 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-101100","namespace":"kube-system","uid":"cd31d030-75f8-4abb-bcad-34031cec7aa6","resourceVersion":"328","creationTimestamp":"2024-05-13T23:56:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.106.39:2379","kubernetes.io/config.hash":"1af4b764a5249ff25d3c1c709387c273","kubernetes.io/config.mirror":"1af4b764a5249ff25d3c1c709387c273","kubernetes.io/config.seen":"2024-05-13T23:56:09.392109641Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0513 23:59:22.532180    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:59:22.532247    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:22.532247    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:22.532247    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:22.534481    4024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:59:22.534481    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:22.534481    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:22.534481    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:22.534481    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:22.534481    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:22 GMT
	I0513 23:59:22.534481    4024 round_trippers.go:580]     Audit-Id: c5e53928-6494-49c5-b905-e1e8a8731dbe
	I0513 23:59:22.534481    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:22.535443    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"452","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0513 23:59:22.535793    4024 pod_ready.go:92] pod "etcd-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0513 23:59:22.535851    4024 pod_ready.go:81] duration metric: took 8.3157ms for pod "etcd-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0513 23:59:22.535851    4024 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0513 23:59:22.535903    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-101100
	I0513 23:59:22.535967    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:22.535967    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:22.535967    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:22.538249    4024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:59:22.538249    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:22.538249    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:22.538249    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:22.538249    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:22.538249    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:22 GMT
	I0513 23:59:22.538249    4024 round_trippers.go:580]     Audit-Id: 90350be4-c0d4-410e-ac98-b2588c4383e1
	I0513 23:59:22.538590    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:22.538722    4024 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-101100","namespace":"kube-system","uid":"1d9c79a4-1e4a-46fb-b3e8-02a4775f40af","resourceVersion":"312","creationTimestamp":"2024-05-13T23:56:07Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.106.39:8443","kubernetes.io/config.hash":"03d9b35578220c9e99f77722d9aa294f","kubernetes.io/config.mirror":"03d9b35578220c9e99f77722d9aa294f","kubernetes.io/config.seen":"2024-05-13T23:56:02.155854146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0513 23:59:22.539298    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:59:22.539298    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:22.539298    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:22.539298    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:22.541313    4024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:59:22.541313    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:22.541313    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:22.541313    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:22.541313    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:22.541313    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:22 GMT
	I0513 23:59:22.541313    4024 round_trippers.go:580]     Audit-Id: 2144f504-fda8-4c30-a5d7-7fb1c0108cb2
	I0513 23:59:22.541313    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:22.542372    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"452","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0513 23:59:22.542536    4024 pod_ready.go:92] pod "kube-apiserver-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0513 23:59:22.542536    4024 pod_ready.go:81] duration metric: took 6.684ms for pod "kube-apiserver-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0513 23:59:22.542536    4024 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0513 23:59:22.542536    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-101100
	I0513 23:59:22.542536    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:22.542536    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:22.542536    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:22.545172    4024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:59:22.545172    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:22.545172    4024 round_trippers.go:580]     Audit-Id: df76a2ac-8cd2-4808-98f0-71573753d8ef
	I0513 23:59:22.545172    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:22.545172    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:22.545172    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:22.545172    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:22.545172    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:22 GMT
	I0513 23:59:22.545911    4024 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-101100","namespace":"kube-system","uid":"1a74381a-7477-4fd3-b344-c4a230014f97","resourceVersion":"308","creationTimestamp":"2024-05-13T23:56:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5393de2704b2efef461d22fa52aa93c8","kubernetes.io/config.mirror":"5393de2704b2efef461d22fa52aa93c8","kubernetes.io/config.seen":"2024-05-13T23:56:09.392106640Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0513 23:59:22.545911    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:59:22.545911    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:22.545911    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:22.545911    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:22.547670    4024 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0513 23:59:22.548665    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:22.548665    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:22.548665    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:22.548665    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:22.548665    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:22.548665    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:22 GMT
	I0513 23:59:22.548665    4024 round_trippers.go:580]     Audit-Id: bb51be0f-f899-4ff1-b195-3568c0b840ee
	I0513 23:59:22.548790    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"452","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0513 23:59:22.548790    4024 pod_ready.go:92] pod "kube-controller-manager-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0513 23:59:22.548790    4024 pod_ready.go:81] duration metric: took 6.2536ms for pod "kube-controller-manager-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0513 23:59:22.548790    4024 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b25hq" in "kube-system" namespace to be "Ready" ...
	I0513 23:59:22.707500    4024 request.go:629] Waited for 158.7013ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b25hq
	I0513 23:59:22.708059    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b25hq
	I0513 23:59:22.708125    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:22.708125    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:22.708125    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:22.711824    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:59:22.711891    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:22.711891    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:22.711891    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:22.711891    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:22.711891    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:22 GMT
	I0513 23:59:22.711891    4024 round_trippers.go:580]     Audit-Id: ec4c8da7-7b2c-4457-9eeb-6ba45cf793ee
	I0513 23:59:22.711891    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:22.712292    4024 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-b25hq","generateName":"kube-proxy-","namespace":"kube-system","uid":"d39f5818-3e88-4162-a7ce-734ca28103bf","resourceVersion":"615","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5836 chars]
	I0513 23:59:22.909756    4024 request.go:629] Waited for 196.583ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:22.909950    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100-m02
	I0513 23:59:22.909950    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:22.909950    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:22.910045    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:22.913293    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:59:22.913293    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:22.913293    4024 round_trippers.go:580]     Audit-Id: db842993-fa41-4243-a7ee-faffe37fdd20
	I0513 23:59:22.913293    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:22.913293    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:22.913293    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:22.913293    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:22.913293    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:23 GMT
	I0513 23:59:22.913672    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"632","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3143 chars]
	I0513 23:59:22.913835    4024 pod_ready.go:92] pod "kube-proxy-b25hq" in "kube-system" namespace has status "Ready":"True"
	I0513 23:59:22.913835    4024 pod_ready.go:81] duration metric: took 365.0238ms for pod "kube-proxy-b25hq" in "kube-system" namespace to be "Ready" ...
	I0513 23:59:22.913835    4024 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zhcz6" in "kube-system" namespace to be "Ready" ...
	I0513 23:59:23.111213    4024 request.go:629] Waited for 197.3664ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zhcz6
	I0513 23:59:23.111667    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zhcz6
	I0513 23:59:23.111667    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:23.111667    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:23.111870    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:23.114168    4024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:59:23.115202    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:23.115235    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:23.115235    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:23 GMT
	I0513 23:59:23.115235    4024 round_trippers.go:580]     Audit-Id: cec55591-3be4-40f7-8941-390a8689f1b3
	I0513 23:59:23.115235    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:23.115235    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:23.115235    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:23.115271    4024 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zhcz6","generateName":"kube-proxy-","namespace":"kube-system","uid":"a9a488af-41ba-47f3-87b0-5a2f062afad6","resourceVersion":"403","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0513 23:59:23.312855    4024 request.go:629] Waited for 196.8477ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:59:23.312855    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:59:23.312855    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:23.312855    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:23.312855    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:23.315618    4024 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0513 23:59:23.315618    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:23.315618    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:23.315618    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:23.315618    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:23.315618    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:23 GMT
	I0513 23:59:23.315618    4024 round_trippers.go:580]     Audit-Id: 2f56ca31-afb5-4491-8e27-435b12de1148
	I0513 23:59:23.315618    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:23.316713    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"452","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0513 23:59:23.317187    4024 pod_ready.go:92] pod "kube-proxy-zhcz6" in "kube-system" namespace has status "Ready":"True"
	I0513 23:59:23.317187    4024 pod_ready.go:81] duration metric: took 403.3277ms for pod "kube-proxy-zhcz6" in "kube-system" namespace to be "Ready" ...
	I0513 23:59:23.317187    4024 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0513 23:59:23.516255    4024 request.go:629] Waited for 198.6236ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-101100
	I0513 23:59:23.516661    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-101100
	I0513 23:59:23.516766    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:23.516766    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:23.516766    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:23.519915    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:59:23.520269    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:23.520269    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:23 GMT
	I0513 23:59:23.520269    4024 round_trippers.go:580]     Audit-Id: 10ffd413-321f-4103-b204-21902bbf0de0
	I0513 23:59:23.520269    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:23.520269    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:23.520269    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:23.520388    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:23.520671    4024 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-101100","namespace":"kube-system","uid":"d7300c2d-377f-4061-bd34-5f7593b7e827","resourceVersion":"306","creationTimestamp":"2024-05-13T23:56:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8083abd658221f47cabf81a00c4ca98e","kubernetes.io/config.mirror":"8083abd658221f47cabf81a00c4ca98e","kubernetes.io/config.seen":"2024-05-13T23:56:09.392108241Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0513 23:59:23.718699    4024 request.go:629] Waited for 197.1499ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:59:23.719105    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes/multinode-101100
	I0513 23:59:23.719398    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:23.719508    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:23.719508    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:23.726765    4024 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0513 23:59:23.726765    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:23.726765    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:23.726765    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:23 GMT
	I0513 23:59:23.726765    4024 round_trippers.go:580]     Audit-Id: 9888ae58-7616-48f7-bdef-801ee5085ab7
	I0513 23:59:23.726765    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:23.726765    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:23.726765    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:23.726765    4024 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"452","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0513 23:59:23.727555    4024 pod_ready.go:92] pod "kube-scheduler-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0513 23:59:23.727555    4024 pod_ready.go:81] duration metric: took 410.3435ms for pod "kube-scheduler-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0513 23:59:23.727555    4024 pod_ready.go:38] duration metric: took 1.2182579s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0513 23:59:23.727555    4024 system_svc.go:44] waiting for kubelet service to be running ....
	I0513 23:59:23.736659    4024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 23:59:23.760824    4024 system_svc.go:56] duration metric: took 33.2675ms WaitForService to wait for kubelet
	I0513 23:59:23.760824    4024 kubeadm.go:576] duration metric: took 20.9983704s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 23:59:23.760925    4024 node_conditions.go:102] verifying NodePressure condition ...
	I0513 23:59:23.906581    4024 request.go:629] Waited for 145.647ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.106.39:8443/api/v1/nodes
	I0513 23:59:23.906734    4024 round_trippers.go:463] GET https://172.23.106.39:8443/api/v1/nodes
	I0513 23:59:23.906734    4024 round_trippers.go:469] Request Headers:
	I0513 23:59:23.906734    4024 round_trippers.go:473]     Accept: application/json, */*
	I0513 23:59:23.906734    4024 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0513 23:59:23.909980    4024 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0513 23:59:23.909980    4024 round_trippers.go:577] Response Headers:
	I0513 23:59:23.909980    4024 round_trippers.go:580]     Date: Mon, 13 May 2024 23:59:24 GMT
	I0513 23:59:23.909980    4024 round_trippers.go:580]     Audit-Id: 3704a3b9-76ab-44c2-ad35-fe5f95a6cf05
	I0513 23:59:23.909980    4024 round_trippers.go:580]     Cache-Control: no-cache, private
	I0513 23:59:23.909980    4024 round_trippers.go:580]     Content-Type: application/json
	I0513 23:59:23.909980    4024 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0513 23:59:23.909980    4024 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0513 23:59:23.911268    4024 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"635"},"items":[{"metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"452","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9147 chars]
	I0513 23:59:23.912277    4024 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0513 23:59:23.912351    4024 node_conditions.go:123] node cpu capacity is 2
	I0513 23:59:23.912351    4024 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0513 23:59:23.912351    4024 node_conditions.go:123] node cpu capacity is 2
	I0513 23:59:23.912457    4024 node_conditions.go:105] duration metric: took 151.5231ms to run NodePressure ...
	I0513 23:59:23.912457    4024 start.go:240] waiting for startup goroutines ...
	I0513 23:59:23.912555    4024 start.go:254] writing updated cluster config ...
	I0513 23:59:23.924083    4024 ssh_runner.go:195] Run: rm -f paused
	I0513 23:59:24.047063    4024 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0513 23:59:24.052195    4024 out.go:177] * Done! kubectl is now configured to use "multinode-101100" cluster and "default" namespace by default
	
	
	==> Docker <==
	May 13 23:56:36 multinode-101100 dockerd[1320]: time="2024-05-13T23:56:36.036170990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 23:56:36 multinode-101100 dockerd[1320]: time="2024-05-13T23:56:36.068192914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 13 23:56:36 multinode-101100 dockerd[1320]: time="2024-05-13T23:56:36.068516734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 13 23:56:36 multinode-101100 dockerd[1320]: time="2024-05-13T23:56:36.068545936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 23:56:36 multinode-101100 dockerd[1320]: time="2024-05-13T23:56:36.068800552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 23:56:36 multinode-101100 cri-dockerd[1220]: time="2024-05-13T23:56:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8f7c140951f4f8270da243f55135e9f108f3cdf5ef11a4e990e06822ace5adbd/resolv.conf as [nameserver 172.23.96.1]"
	May 13 23:56:36 multinode-101100 cri-dockerd[1220]: time="2024-05-13T23:56:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8bb49b28c842af421711ef939d018058baa07a32bbcdc98976511d4800986697/resolv.conf as [nameserver 172.23.96.1]"
	May 13 23:56:36 multinode-101100 dockerd[1320]: time="2024-05-13T23:56:36.412615584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 13 23:56:36 multinode-101100 dockerd[1320]: time="2024-05-13T23:56:36.412843198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 13 23:56:36 multinode-101100 dockerd[1320]: time="2024-05-13T23:56:36.413059611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 23:56:36 multinode-101100 dockerd[1320]: time="2024-05-13T23:56:36.413716451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 23:56:36 multinode-101100 dockerd[1320]: time="2024-05-13T23:56:36.508943128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 13 23:56:36 multinode-101100 dockerd[1320]: time="2024-05-13T23:56:36.509095137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 13 23:56:36 multinode-101100 dockerd[1320]: time="2024-05-13T23:56:36.509326651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 23:56:36 multinode-101100 dockerd[1320]: time="2024-05-13T23:56:36.509690773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 23:59:46 multinode-101100 dockerd[1320]: time="2024-05-13T23:59:46.302730722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 13 23:59:46 multinode-101100 dockerd[1320]: time="2024-05-13T23:59:46.302911449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 13 23:59:46 multinode-101100 dockerd[1320]: time="2024-05-13T23:59:46.302933652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 23:59:46 multinode-101100 dockerd[1320]: time="2024-05-13T23:59:46.303032067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 23:59:46 multinode-101100 cri-dockerd[1220]: time="2024-05-13T23:59:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/76d1b8ce19aba5b210540936b7a4b3d885cf4632a985872e3cf05d6cea2e0ca2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 13 23:59:47 multinode-101100 cri-dockerd[1220]: time="2024-05-13T23:59:47Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	May 13 23:59:47 multinode-101100 dockerd[1320]: time="2024-05-13T23:59:47.769807164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 13 23:59:47 multinode-101100 dockerd[1320]: time="2024-05-13T23:59:47.769956667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 13 23:59:47 multinode-101100 dockerd[1320]: time="2024-05-13T23:59:47.769971967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 13 23:59:47 multinode-101100 dockerd[1320]: time="2024-05-13T23:59:47.770075569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	57dea5416eb67       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   46 seconds ago      Running             busybox                   0                   76d1b8ce19aba       busybox-fc5497c4f-xqj6w
	76c5ab7859eff       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   0                   8bb49b28c842a       coredns-7db6d8ff4d-4kmx4
	e6ee22ee5c1b8       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       0                   8f7c140951f4f       storage-provisioner
	9c4eb727cedb6       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              4 minutes ago       Running             kindnet-cni               0                   90d7537422a83       kindnet-9q2tv
	91edaaa00da23       a0bf559e280cf                                                                                         4 minutes ago       Running             kube-proxy                0                   9bd694480978f       kube-proxy-zhcz6
	eda79d47d28ff       3861cfcd7c04c                                                                                         4 minutes ago       Running             etcd                      0                   287e744a4dc2e       etcd-multinode-101100
	e96f94398d6dd       c7aad43836fa5                                                                                         4 minutes ago       Running             kube-controller-manager   0                   da9268fd6556b       kube-controller-manager-multinode-101100
	964887fc5d362       259c8277fcbbc                                                                                         4 minutes ago       Running             kube-scheduler            0                   fcb3b27edcd2a       kube-scheduler-multinode-101100
	06f1a683cad83       c42f13656d0b2                                                                                         4 minutes ago       Running             kube-apiserver            0                   ad0550a5dabf1       kube-apiserver-multinode-101100
	
	
	==> coredns [76c5ab7859ef] <==
	[INFO] 10.244.1.2:36311 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110402s
	[INFO] 10.244.0.3:43910 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000301006s
	[INFO] 10.244.0.3:52495 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000145803s
	[INFO] 10.244.0.3:46357 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066702s
	[INFO] 10.244.0.3:41390 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062301s
	[INFO] 10.244.0.3:35739 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000084301s
	[INFO] 10.244.0.3:44800 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000163303s
	[INFO] 10.244.0.3:57631 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068702s
	[INFO] 10.244.0.3:50842 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135702s
	[INFO] 10.244.1.2:41210 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204604s
	[INFO] 10.244.1.2:57858 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073801s
	[INFO] 10.244.1.2:48782 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000152303s
	[INFO] 10.244.1.2:36081 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121002s
	[INFO] 10.244.0.3:46909 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115002s
	[INFO] 10.244.0.3:36030 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000220205s
	[INFO] 10.244.0.3:56187 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059401s
	[INFO] 10.244.0.3:51500 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099802s
	[INFO] 10.244.1.2:57247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147903s
	[INFO] 10.244.1.2:46132 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000170203s
	[INFO] 10.244.1.2:57206 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000452309s
	[INFO] 10.244.1.2:44795 - 5 "PTR IN 1.96.23.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000146203s
	[INFO] 10.244.0.3:33385 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000082102s
	[INFO] 10.244.0.3:56742 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000173704s
	[INFO] 10.244.0.3:46927 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185904s
	[INFO] 10.244.0.3:42956 - 5 "PTR IN 1.96.23.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000054801s
	
	
	==> describe nodes <==
	Name:               multinode-101100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-101100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	                    minikube.k8s.io/name=multinode-101100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_13T23_56_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 May 2024 23:56:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-101100
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 May 2024 00:00:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 May 2024 00:00:14 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 May 2024 00:00:14 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 May 2024 00:00:14 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 May 2024 00:00:14 +0000   Mon, 13 May 2024 23:56:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.106.39
	  Hostname:    multinode-101100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 27fd1bbb8a204b01b58052f9ad09fad3
	  System UUID:                9b23fe4d-6d34-444b-8185-a84d51d23610
	  Boot ID:                    561f0484-2e58-4bed-919e-5b67a5410789
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xqj6w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 coredns-7db6d8ff4d-4kmx4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m10s
	  kube-system                 etcd-multinode-101100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m24s
	  kube-system                 kindnet-9q2tv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m10s
	  kube-system                 kube-apiserver-multinode-101100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-controller-manager-multinode-101100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-proxy-zhcz6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-scheduler-multinode-101100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m31s (x8 over 4m31s)  kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m31s (x8 over 4m31s)  kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m31s (x7 over 4m31s)  kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m24s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m24s                  kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m24s                  kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m24s                  kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m11s                  node-controller  Node multinode-101100 event: Registered Node multinode-101100 in Controller
	  Normal  NodeReady                3m58s                  kubelet          Node multinode-101100 status is now: NodeReady
	
	
	Name:               multinode-101100-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-101100-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	                    minikube.k8s.io/name=multinode-101100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_13T23_59_02_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 May 2024 23:59:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-101100-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 May 2024 00:00:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 May 2024 00:00:03 +0000   Mon, 13 May 2024 23:59:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 May 2024 00:00:03 +0000   Mon, 13 May 2024 23:59:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 May 2024 00:00:03 +0000   Mon, 13 May 2024 23:59:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 May 2024 00:00:03 +0000   Mon, 13 May 2024 23:59:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.109.58
	  Hostname:    multinode-101100-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 8d348bb1bbc048f4b99c681873b42d63
	  System UUID:                4330851b-5248-f245-9378-5fc25e670b55
	  Boot ID:                    9f102be6-1468-4570-8696-97e5ce51649a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-q7442    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kindnet-2lwsm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      91s
	  kube-system                 kube-proxy-b25hq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 80s                kube-proxy       
	  Normal  RegisteredNode           91s                node-controller  Node multinode-101100-m02 event: Registered Node multinode-101100-m02 in Controller
	  Normal  NodeHasSufficientMemory  91s (x2 over 91s)  kubelet          Node multinode-101100-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    91s (x2 over 91s)  kubelet          Node multinode-101100-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     91s (x2 over 91s)  kubelet          Node multinode-101100-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  91s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                71s                kubelet          Node multinode-101100-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[May13 23:55] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.166749] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[ +27.594794] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +0.094453] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.459534] systemd-fstab-generator[974]: Ignoring "noauto" option for root device
	[  +0.181177] systemd-fstab-generator[986]: Ignoring "noauto" option for root device
	[  +0.198598] systemd-fstab-generator[1000]: Ignoring "noauto" option for root device
	[  +2.731911] systemd-fstab-generator[1173]: Ignoring "noauto" option for root device
	[  +0.180860] systemd-fstab-generator[1185]: Ignoring "noauto" option for root device
	[  +0.169672] systemd-fstab-generator[1197]: Ignoring "noauto" option for root device
	[  +0.262481] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[ +11.348582] systemd-fstab-generator[1306]: Ignoring "noauto" option for root device
	[  +0.102335] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.683124] systemd-fstab-generator[1502]: Ignoring "noauto" option for root device
	[May13 23:56] systemd-fstab-generator[1696]: Ignoring "noauto" option for root device
	[  +0.090270] kauditd_printk_skb: 73 callbacks suppressed
	[  +7.518822] systemd-fstab-generator[2100]: Ignoring "noauto" option for root device
	[  +0.123867] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.535802] systemd-fstab-generator[2299]: Ignoring "noauto" option for root device
	[  +0.203687] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.391672] kauditd_printk_skb: 51 callbacks suppressed
	[  +0.113857] hrtimer: interrupt took 3132020 ns
	[May13 23:59] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [eda79d47d28f] <==
	{"level":"info","ts":"2024-05-13T23:56:04.46196Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-13T23:56:04.461378Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.23.106.39:2379"}
	{"level":"info","ts":"2024-05-13T23:56:04.462449Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bb849d1df0b559d7","local-member-id":"6e4c15c3d0f3380f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-13T23:56:04.465167Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-13T23:56:04.468504Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-13T23:56:04.46971Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-13T23:56:30.006391Z","caller":"traceutil/trace.go:171","msg":"trace[50457917] transaction","detail":"{read_only:false; response_revision:409; number_of_response:1; }","duration":"166.27832ms","start":"2024-05-13T23:56:29.840098Z","end":"2024-05-13T23:56:30.006377Z","steps":["trace[50457917] 'process raft request'  (duration: 165.793986ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-13T23:56:46.902109Z","caller":"traceutil/trace.go:171","msg":"trace[2067011947] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"155.824752ms","start":"2024-05-13T23:56:46.74627Z","end":"2024-05-13T23:56:46.902094Z","steps":["trace[2067011947] 'process raft request'  (duration: 155.700149ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-13T23:58:56.367127Z","caller":"traceutil/trace.go:171","msg":"trace[1001008086] transaction","detail":"{read_only:false; response_revision:560; number_of_response:1; }","duration":"279.167644ms","start":"2024-05-13T23:58:56.087943Z","end":"2024-05-13T23:58:56.36711Z","steps":["trace[1001008086] 'process raft request'  (duration: 279.035513ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:58:56.706627Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.461353ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-13T23:58:56.706858Z","caller":"traceutil/trace.go:171","msg":"trace[1889216291] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:560; }","duration":"138.713514ms","start":"2024-05-13T23:58:56.56809Z","end":"2024-05-13T23:58:56.706803Z","steps":["trace[1889216291] 'range keys from in-memory index tree'  (duration: 138.315718ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-13T23:59:06.561675Z","caller":"traceutil/trace.go:171","msg":"trace[1873962194] transaction","detail":"{read_only:false; response_revision:601; number_of_response:1; }","duration":"131.435141ms","start":"2024-05-13T23:59:06.430222Z","end":"2024-05-13T23:59:06.561658Z","steps":["trace[1873962194] 'process raft request'  (duration: 131.153981ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-13T23:59:12.858366Z","caller":"traceutil/trace.go:171","msg":"trace[301496152] transaction","detail":"{read_only:false; response_revision:609; number_of_response:1; }","duration":"260.692328ms","start":"2024-05-13T23:59:12.59766Z","end":"2024-05-13T23:59:12.858352Z","steps":["trace[301496152] 'process raft request'  (duration: 260.585606ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-13T23:59:12.858441Z","caller":"traceutil/trace.go:171","msg":"trace[742631112] linearizableReadLoop","detail":"{readStateIndex:658; appliedIndex:658; }","duration":"256.783839ms","start":"2024-05-13T23:59:12.601643Z","end":"2024-05-13T23:59:12.858427Z","steps":["trace[742631112] 'read index received'  (duration: 256.777938ms)","trace[742631112] 'applied index is now lower than readState.Index'  (duration: 4.701µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-13T23:59:12.858612Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"256.953173ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-05-13T23:59:12.860213Z","caller":"traceutil/trace.go:171","msg":"trace[154749059] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:609; }","duration":"258.591004ms","start":"2024-05-13T23:59:12.601606Z","end":"2024-05-13T23:59:12.860197Z","steps":["trace[154749059] 'agreement among raft nodes before linearized reading'  (duration: 256.889961ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-13T23:59:12.871931Z","caller":"traceutil/trace.go:171","msg":"trace[849318068] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"124.169167ms","start":"2024-05-13T23:59:12.747752Z","end":"2024-05-13T23:59:12.871922Z","steps":["trace[849318068] 'process raft request'  (duration: 124.010835ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:59:12.872347Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.700938ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-101100-m02\" ","response":"range_response_count:1 size:3148"}
	{"level":"info","ts":"2024-05-13T23:59:12.87243Z","caller":"traceutil/trace.go:171","msg":"trace[1694491582] range","detail":"{range_begin:/registry/minions/multinode-101100-m02; range_end:; response_count:1; response_revision:610; }","duration":"158.875274ms","start":"2024-05-13T23:59:12.713545Z","end":"2024-05-13T23:59:12.87242Z","steps":["trace[1694491582] 'agreement among raft nodes before linearized reading'  (duration: 158.645027ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-13T23:59:17.153915Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.674963ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-05-13T23:59:17.154695Z","caller":"traceutil/trace.go:171","msg":"trace[2016119975] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:619; }","duration":"258.481717ms","start":"2024-05-13T23:59:16.896191Z","end":"2024-05-13T23:59:17.154673Z","steps":["trace[2016119975] 'range keys from in-memory index tree'  (duration: 257.463322ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-13T23:59:17.310363Z","caller":"traceutil/trace.go:171","msg":"trace[1813211530] linearizableReadLoop","detail":"{readStateIndex:671; appliedIndex:670; }","duration":"102.176013ms","start":"2024-05-13T23:59:17.208173Z","end":"2024-05-13T23:59:17.310349Z","steps":["trace[1813211530] 'read index received'  (duration: 101.827546ms)","trace[1813211530] 'applied index is now lower than readState.Index'  (duration: 347.867µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-13T23:59:17.311042Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.740521ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-101100-m02\" ","response":"range_response_count:1 size:3148"}
	{"level":"info","ts":"2024-05-13T23:59:17.311161Z","caller":"traceutil/trace.go:171","msg":"trace[1962437537] range","detail":"{range_begin:/registry/minions/multinode-101100-m02; range_end:; response_count:1; response_revision:620; }","duration":"103.017274ms","start":"2024-05-13T23:59:17.20813Z","end":"2024-05-13T23:59:17.311147Z","steps":["trace[1962437537] 'agreement among raft nodes before linearized reading'  (duration: 102.501775ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-13T23:59:17.312314Z","caller":"traceutil/trace.go:171","msg":"trace[1626502975] transaction","detail":"{read_only:false; response_revision:620; number_of_response:1; }","duration":"152.192914ms","start":"2024-05-13T23:59:17.159997Z","end":"2024-05-13T23:59:17.31219Z","steps":["trace[1626502975] 'process raft request'  (duration: 150.118116ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:00:33 up 6 min,  0 users,  load average: 0.13, 0.26, 0.15
	Linux multinode-101100 5.10.207 #1 SMP Thu May 9 02:07:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9c4eb727cedb] <==
	I0513 23:59:31.456087       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0513 23:59:41.464072       1 main.go:223] Handling node with IPs: map[172.23.106.39:{}]
	I0513 23:59:41.464098       1 main.go:227] handling current node
	I0513 23:59:41.464172       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0513 23:59:41.464183       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0513 23:59:51.472325       1 main.go:223] Handling node with IPs: map[172.23.106.39:{}]
	I0513 23:59:51.472430       1 main.go:227] handling current node
	I0513 23:59:51.472443       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0513 23:59:51.472451       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:00:01.480547       1 main.go:223] Handling node with IPs: map[172.23.106.39:{}]
	I0514 00:00:01.480601       1 main.go:227] handling current node
	I0514 00:00:01.480656       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:00:01.480680       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:00:11.491339       1 main.go:223] Handling node with IPs: map[172.23.106.39:{}]
	I0514 00:00:11.491491       1 main.go:227] handling current node
	I0514 00:00:11.491507       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:00:11.491515       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:00:21.497398       1 main.go:223] Handling node with IPs: map[172.23.106.39:{}]
	I0514 00:00:21.497436       1 main.go:227] handling current node
	I0514 00:00:21.497448       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:00:21.497456       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:00:31.506315       1 main.go:223] Handling node with IPs: map[172.23.106.39:{}]
	I0514 00:00:31.506432       1 main.go:227] handling current node
	I0514 00:00:31.506446       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:00:31.506454       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [06f1a683cad8] <==
	I0513 23:56:07.061681       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0513 23:56:07.070323       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0513 23:56:07.070391       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0513 23:56:08.119297       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0513 23:56:08.205583       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0513 23:56:08.305810       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0513 23:56:08.322236       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.106.39]
	I0513 23:56:08.323374       1 controller.go:615] quota admission added evaluator for: endpoints
	I0513 23:56:08.330941       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0513 23:56:09.135356       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0513 23:56:09.438800       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0513 23:56:09.469145       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0513 23:56:09.502427       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0513 23:56:23.088738       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0513 23:56:23.337903       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0513 23:59:51.169072       1 conn.go:339] Error on socket receive: read tcp 172.23.106.39:8443->172.23.96.1:51921: use of closed network connection
	E0513 23:59:51.602385       1 conn.go:339] Error on socket receive: read tcp 172.23.106.39:8443->172.23.96.1:51923: use of closed network connection
	E0513 23:59:52.070032       1 conn.go:339] Error on socket receive: read tcp 172.23.106.39:8443->172.23.96.1:51925: use of closed network connection
	E0513 23:59:52.483288       1 conn.go:339] Error on socket receive: read tcp 172.23.106.39:8443->172.23.96.1:51927: use of closed network connection
	E0513 23:59:52.891128       1 conn.go:339] Error on socket receive: read tcp 172.23.106.39:8443->172.23.96.1:51929: use of closed network connection
	E0513 23:59:53.310332       1 conn.go:339] Error on socket receive: read tcp 172.23.106.39:8443->172.23.96.1:51931: use of closed network connection
	E0513 23:59:54.059969       1 conn.go:339] Error on socket receive: read tcp 172.23.106.39:8443->172.23.96.1:51934: use of closed network connection
	E0514 00:00:04.461263       1 conn.go:339] Error on socket receive: read tcp 172.23.106.39:8443->172.23.96.1:51936: use of closed network connection
	E0514 00:00:04.884305       1 conn.go:339] Error on socket receive: read tcp 172.23.106.39:8443->172.23.96.1:51942: use of closed network connection
	E0514 00:00:15.313370       1 conn.go:339] Error on socket receive: read tcp 172.23.106.39:8443->172.23.96.1:51944: use of closed network connection
	
	
	==> kube-controller-manager [e96f94398d6d] <==
	I0513 23:56:23.736584       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.765409ms"
	I0513 23:56:23.736691       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.105µs"
	I0513 23:56:23.741069       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="82.307µs"
	I0513 23:56:24.558346       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.410112ms"
	I0513 23:56:24.599621       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.388659ms"
	I0513 23:56:24.599778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.705µs"
	I0513 23:56:35.460855       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.604µs"
	I0513 23:56:35.495875       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="63.404µs"
	I0513 23:56:36.868700       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="85.505µs"
	I0513 23:56:36.916603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.935352ms"
	I0513 23:56:36.917123       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.803µs"
	I0513 23:56:37.577172       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0513 23:59:02.230067       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m02\" does not exist"
	I0513 23:59:02.246355       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m02" podCIDRs=["10.244.1.0/24"]
	I0513 23:59:02.603699       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m02"
	I0513 23:59:22.527169       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0513 23:59:45.791856       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.887671ms"
	I0513 23:59:45.808219       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.096894ms"
	I0513 23:59:45.808747       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.005µs"
	I0513 23:59:45.809833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.705µs"
	I0513 23:59:45.811263       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.604µs"
	I0513 23:59:48.526617       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.926472ms"
	I0513 23:59:48.529326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.302µs"
	I0513 23:59:48.555529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.972453ms"
	I0513 23:59:48.556317       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.601µs"
	
	
	==> kube-proxy [91edaaa00da2] <==
	I0513 23:56:24.901713       1 server_linux.go:69] "Using iptables proxy"
	I0513 23:56:24.929714       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.106.39"]
	I0513 23:56:24.982680       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0513 23:56:24.982795       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0513 23:56:24.982816       1 server_linux.go:165] "Using iptables Proxier"
	I0513 23:56:24.988669       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0513 23:56:24.989566       1 server.go:872] "Version info" version="v1.30.0"
	I0513 23:56:24.989671       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0513 23:56:24.992700       1 config.go:192] "Starting service config controller"
	I0513 23:56:24.993131       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0513 23:56:24.993327       1 config.go:101] "Starting endpoint slice config controller"
	I0513 23:56:24.993339       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0513 23:56:24.994714       1 config.go:319] "Starting node config controller"
	I0513 23:56:24.994744       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0513 23:56:25.094420       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0513 23:56:25.094530       1 shared_informer.go:320] Caches are synced for service config
	I0513 23:56:25.094981       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [964887fc5d36] <==
	W0513 23:56:07.344429       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0513 23:56:07.344853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0513 23:56:07.410556       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0513 23:56:07.410716       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0513 23:56:07.423084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0513 23:56:07.423126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0513 23:56:07.467897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0513 23:56:07.467939       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0513 23:56:07.484903       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0513 23:56:07.485019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0513 23:56:07.545758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0513 23:56:07.546087       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0513 23:56:07.573884       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0513 23:56:07.573980       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0513 23:56:07.633780       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0513 23:56:07.633901       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0513 23:56:07.680821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0513 23:56:07.680938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0513 23:56:07.704130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0513 23:56:07.704357       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0513 23:56:07.736914       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0513 23:56:07.737079       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0513 23:56:07.754367       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0513 23:56:07.754798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0513 23:56:09.676327       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 13 23:56:36 multinode-101100 kubelet[2107]: I0513 23:56:36.893644    2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podStartSLOduration=13.893611165 podStartE2EDuration="13.893611165s" podCreationTimestamp="2024-05-13 23:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-13 23:56:36.866862342 +0000 UTC m=+27.604789888" watchObservedRunningTime="2024-05-13 23:56:36.893611165 +0000 UTC m=+27.631538811"
	May 13 23:57:09 multinode-101100 kubelet[2107]: E0513 23:57:09.476141    2107 iptables.go:577] "Could not set up iptables canary" err=<
	May 13 23:57:09 multinode-101100 kubelet[2107]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 13 23:57:09 multinode-101100 kubelet[2107]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 13 23:57:09 multinode-101100 kubelet[2107]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 13 23:57:09 multinode-101100 kubelet[2107]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 13 23:58:09 multinode-101100 kubelet[2107]: E0513 23:58:09.470247    2107 iptables.go:577] "Could not set up iptables canary" err=<
	May 13 23:58:09 multinode-101100 kubelet[2107]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 13 23:58:09 multinode-101100 kubelet[2107]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 13 23:58:09 multinode-101100 kubelet[2107]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 13 23:58:09 multinode-101100 kubelet[2107]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 13 23:59:09 multinode-101100 kubelet[2107]: E0513 23:59:09.470375    2107 iptables.go:577] "Could not set up iptables canary" err=<
	May 13 23:59:09 multinode-101100 kubelet[2107]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 13 23:59:09 multinode-101100 kubelet[2107]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 13 23:59:09 multinode-101100 kubelet[2107]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 13 23:59:09 multinode-101100 kubelet[2107]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 13 23:59:45 multinode-101100 kubelet[2107]: I0513 23:59:45.772021    2107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=195.771934435 podStartE2EDuration="3m15.771934435s" podCreationTimestamp="2024-05-13 23:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-13 23:56:36.942247115 +0000 UTC m=+27.680174661" watchObservedRunningTime="2024-05-13 23:59:45.771934435 +0000 UTC m=+216.509862081"
	May 13 23:59:45 multinode-101100 kubelet[2107]: I0513 23:59:45.774511    2107 topology_manager.go:215] "Topology Admit Handler" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae" podNamespace="default" podName="busybox-fc5497c4f-xqj6w"
	May 13 23:59:45 multinode-101100 kubelet[2107]: I0513 23:59:45.914424    2107 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwkj4\" (UniqueName: \"kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4\") pod \"busybox-fc5497c4f-xqj6w\" (UID: \"106df673-68ba-43dd-8a94-1e41aeb3cfae\") " pod="default/busybox-fc5497c4f-xqj6w"
	May 13 23:59:46 multinode-101100 kubelet[2107]: I0513 23:59:46.475047    2107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76d1b8ce19aba5b210540936b7a4b3d885cf4632a985872e3cf05d6cea2e0ca2"
	May 14 00:00:09 multinode-101100 kubelet[2107]: E0514 00:00:09.470464    2107 iptables.go:577] "Could not set up iptables canary" err=<
	May 14 00:00:09 multinode-101100 kubelet[2107]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 14 00:00:09 multinode-101100 kubelet[2107]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 14 00:00:09 multinode-101100 kubelet[2107]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 14 00:00:09 multinode-101100 kubelet[2107]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0514 00:00:26.168893    8984 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-101100 -n multinode-101100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-101100 -n multinode-101100: (10.7951577s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-101100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (52.69s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (594.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-101100
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-101100
E0514 00:13:33.020264    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-101100: (1m32.5123917s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-101100 --wait=true -v=8 --alsologtostderr
E0514 00:17:50.840415    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0514 00:18:33.034987    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
E0514 00:20:54.053891    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-101100 --wait=true -v=8 --alsologtostderr: (7m41.1436414s)
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-101100
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-101100	172.23.106.39
multinode-101100-m02	172.23.109.58
multinode-101100-m03	172.23.102.231

                                                
                                                
After restart: multinode-101100	172.23.102.122
multinode-101100-m02	172.23.97.128
multinode-101100-m03	172.23.111.37
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-101100 -n multinode-101100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-101100 -n multinode-101100: (10.7029287s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 logs -n 25
E0514 00:22:50.855982    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 logs -n 25: (11.2583811s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                          Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | multinode-101100 ssh -n                                                                                                 | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:06 UTC | 14 May 24 00:06 UTC |
	|         | multinode-101100-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-101100 cp multinode-101100-m02:/home/docker/cp-test.txt                                                       | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:06 UTC | 14 May 24 00:06 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile439564435\001\cp-test_multinode-101100-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-101100 ssh -n                                                                                                 | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:06 UTC | 14 May 24 00:07 UTC |
	|         | multinode-101100-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-101100 cp multinode-101100-m02:/home/docker/cp-test.txt                                                       | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:07 UTC | 14 May 24 00:07 UTC |
	|         | multinode-101100:/home/docker/cp-test_multinode-101100-m02_multinode-101100.txt                                         |                  |                   |         |                     |                     |
	| ssh     | multinode-101100 ssh -n                                                                                                 | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:07 UTC | 14 May 24 00:07 UTC |
	|         | multinode-101100-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-101100 ssh -n multinode-101100 sudo cat                                                                       | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:07 UTC | 14 May 24 00:07 UTC |
	|         | /home/docker/cp-test_multinode-101100-m02_multinode-101100.txt                                                          |                  |                   |         |                     |                     |
	| cp      | multinode-101100 cp multinode-101100-m02:/home/docker/cp-test.txt                                                       | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:07 UTC | 14 May 24 00:07 UTC |
	|         | multinode-101100-m03:/home/docker/cp-test_multinode-101100-m02_multinode-101100-m03.txt                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-101100 ssh -n                                                                                                 | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:07 UTC | 14 May 24 00:07 UTC |
	|         | multinode-101100-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-101100 ssh -n multinode-101100-m03 sudo cat                                                                   | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:07 UTC | 14 May 24 00:08 UTC |
	|         | /home/docker/cp-test_multinode-101100-m02_multinode-101100-m03.txt                                                      |                  |                   |         |                     |                     |
	| cp      | multinode-101100 cp testdata\cp-test.txt                                                                                | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:08 UTC | 14 May 24 00:08 UTC |
	|         | multinode-101100-m03:/home/docker/cp-test.txt                                                                           |                  |                   |         |                     |                     |
	| ssh     | multinode-101100 ssh -n                                                                                                 | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:08 UTC | 14 May 24 00:08 UTC |
	|         | multinode-101100-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-101100 cp multinode-101100-m03:/home/docker/cp-test.txt                                                       | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:08 UTC | 14 May 24 00:08 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile439564435\001\cp-test_multinode-101100-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-101100 ssh -n                                                                                                 | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:08 UTC | 14 May 24 00:08 UTC |
	|         | multinode-101100-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-101100 cp multinode-101100-m03:/home/docker/cp-test.txt                                                       | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:08 UTC | 14 May 24 00:08 UTC |
	|         | multinode-101100:/home/docker/cp-test_multinode-101100-m03_multinode-101100.txt                                         |                  |                   |         |                     |                     |
	| ssh     | multinode-101100 ssh -n                                                                                                 | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:08 UTC | 14 May 24 00:08 UTC |
	|         | multinode-101100-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-101100 ssh -n multinode-101100 sudo cat                                                                       | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:08 UTC | 14 May 24 00:09 UTC |
	|         | /home/docker/cp-test_multinode-101100-m03_multinode-101100.txt                                                          |                  |                   |         |                     |                     |
	| cp      | multinode-101100 cp multinode-101100-m03:/home/docker/cp-test.txt                                                       | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:09 UTC | 14 May 24 00:09 UTC |
	|         | multinode-101100-m02:/home/docker/cp-test_multinode-101100-m03_multinode-101100-m02.txt                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-101100 ssh -n                                                                                                 | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:09 UTC | 14 May 24 00:09 UTC |
	|         | multinode-101100-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-101100 ssh -n multinode-101100-m02 sudo cat                                                                   | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:09 UTC | 14 May 24 00:09 UTC |
	|         | /home/docker/cp-test_multinode-101100-m03_multinode-101100-m02.txt                                                      |                  |                   |         |                     |                     |
	| node    | multinode-101100 node stop m03                                                                                          | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:09 UTC | 14 May 24 00:09 UTC |
	| node    | multinode-101100 node start                                                                                             | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:10 UTC | 14 May 24 00:12 UTC |
	|         | m03 -v=7 --alsologtostderr                                                                                              |                  |                   |         |                     |                     |
	| node    | list -p multinode-101100                                                                                                | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:13 UTC |                     |
	| stop    | -p multinode-101100                                                                                                     | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:13 UTC | 14 May 24 00:14 UTC |
	| start   | -p multinode-101100                                                                                                     | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:14 UTC | 14 May 24 00:22 UTC |
	|         | --wait=true -v=8                                                                                                        |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                       |                  |                   |         |                     |                     |
	| node    | list -p multinode-101100                                                                                                | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:22 UTC |                     |
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/14 00:14:56
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0514 00:14:56.185714    4316 out.go:291] Setting OutFile to fd 880 ...
	I0514 00:14:56.186038    4316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0514 00:14:56.186038    4316 out.go:304] Setting ErrFile to fd 968...
	I0514 00:14:56.186038    4316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0514 00:14:56.205486    4316 out.go:298] Setting JSON to false
	I0514 00:14:56.208459    4316 start.go:129] hostinfo: {"hostname":"minikube5","uptime":7259,"bootTime":1715638436,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4355 Build 19045.4355","kernelVersion":"10.0.19045.4355 Build 19045.4355","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0514 00:14:56.208459    4316 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0514 00:14:56.349739    4316 out.go:177] * [multinode-101100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	I0514 00:14:56.395109    4316 notify.go:220] Checking for updates...
	I0514 00:14:56.554164    4316 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0514 00:14:56.757342    4316 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0514 00:14:56.904945    4316 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0514 00:14:57.042288    4316 out.go:177]   - MINIKUBE_LOCATION=18872
	I0514 00:14:57.142144    4316 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0514 00:14:57.296370    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:14:57.296934    4316 driver.go:392] Setting default libvirt URI to qemu:///system
	I0514 00:15:02.363917    4316 out.go:177] * Using the hyperv driver based on existing profile
	I0514 00:15:02.408815    4316 start.go:297] selected driver: hyperv
	I0514 00:15:02.409275    4316 start.go:901] validating driver "hyperv" against &{Name:multinode-101100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-101100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.106.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.109.58 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.102.231 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0514 00:15:02.409586    4316 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0514 00:15:02.452500    4316 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0514 00:15:02.453496    4316 cni.go:84] Creating CNI manager for ""
	I0514 00:15:02.453496    4316 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0514 00:15:02.453639    4316 start.go:340] cluster config:
	{Name:multinode-101100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-101100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.106.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.109.58 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.102.231 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0514 00:15:02.453996    4316 iso.go:125] acquiring lock: {Name:mkcecbdb7e30e5a0901160a859f9d5b65d250c44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0514 00:15:02.507202    4316 out.go:177] * Starting "multinode-101100" primary control-plane node in "multinode-101100" cluster
	I0514 00:15:02.510874    4316 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0514 00:15:02.511223    4316 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0514 00:15:02.511223    4316 cache.go:56] Caching tarball of preloaded images
	I0514 00:15:02.511411    4316 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0514 00:15:02.511411    4316 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0514 00:15:02.512312    4316 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\config.json ...
	I0514 00:15:02.515317    4316 start.go:360] acquireMachinesLock for multinode-101100: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0514 00:15:02.515317    4316 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-101100"
	I0514 00:15:02.515317    4316 start.go:96] Skipping create...Using existing machine configuration
	I0514 00:15:02.515317    4316 fix.go:54] fixHost starting: 
	I0514 00:15:02.516003    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:15:05.006202    4316 main.go:141] libmachine: [stdout =====>] : Off
	
	I0514 00:15:05.006370    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:05.006370    4316 fix.go:112] recreateIfNeeded on multinode-101100: state=Stopped err=<nil>
	W0514 00:15:05.006370    4316 fix.go:138] unexpected machine state, will restart: <nil>
	I0514 00:15:05.009270    4316 out.go:177] * Restarting existing hyperv VM for "multinode-101100" ...
	I0514 00:15:05.013132    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-101100
	I0514 00:15:07.915262    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:15:07.915443    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:07.915443    4316 main.go:141] libmachine: Waiting for host to start...
	I0514 00:15:07.915506    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:15:09.985756    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:15:09.985756    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:09.985756    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:15:12.281832    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:15:12.281832    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:13.296646    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:15:15.289244    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:15:15.290314    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:15.290314    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:15:17.554873    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:15:17.554873    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:18.569060    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:15:20.499826    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:15:20.499826    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:20.499826    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:15:22.713351    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:15:22.713351    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:23.725580    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:15:25.689973    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:15:25.690050    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:25.690050    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:15:27.970131    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:15:27.970543    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:28.974492    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:15:30.950015    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:15:30.950015    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:30.950015    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:15:33.269358    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:15:33.269970    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:33.271964    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:15:35.155916    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:15:35.155916    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:35.155916    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:15:37.425806    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:15:37.426548    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:37.426548    4316 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\config.json ...
	I0514 00:15:37.428923    4316 machine.go:94] provisionDockerMachine start ...
	I0514 00:15:37.429023    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:15:39.378767    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:15:39.378767    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:39.379476    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:15:41.660453    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:15:41.660453    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:41.664778    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:15:41.665371    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.122 22 <nil> <nil>}
	I0514 00:15:41.665371    4316 main.go:141] libmachine: About to run SSH command:
	hostname
	I0514 00:15:41.789131    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0514 00:15:41.789131    4316 buildroot.go:166] provisioning hostname "multinode-101100"
	I0514 00:15:41.789131    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:15:43.658216    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:15:43.658741    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:43.658741    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:15:45.959367    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:15:45.959803    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:45.963564    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:15:45.964004    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.122 22 <nil> <nil>}
	I0514 00:15:45.964004    4316 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-101100 && echo "multinode-101100" | sudo tee /etc/hostname
	I0514 00:15:46.113194    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-101100
	
	I0514 00:15:46.113194    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:15:48.037299    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:15:48.037299    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:48.037299    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:15:50.304945    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:15:50.304945    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:50.309336    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:15:50.309848    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.122 22 <nil> <nil>}
	I0514 00:15:50.309848    4316 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-101100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-101100/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-101100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0514 00:15:50.454395    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0514 00:15:50.454566    4316 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0514 00:15:50.454566    4316 buildroot.go:174] setting up certificates
	I0514 00:15:50.454566    4316 provision.go:84] configureAuth start
	I0514 00:15:50.454566    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:15:52.344110    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:15:52.344807    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:52.345142    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:15:54.665648    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:15:54.665648    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:54.665648    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:15:56.577827    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:15:56.577827    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:56.578937    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:15:58.947308    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:15:58.947418    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:58.947418    4316 provision.go:143] copyHostCerts
	I0514 00:15:58.947598    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0514 00:15:58.947775    4316 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0514 00:15:58.947867    4316 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0514 00:15:58.948155    4316 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0514 00:15:58.949029    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0514 00:15:58.949250    4316 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0514 00:15:58.949250    4316 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0514 00:15:58.949547    4316 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0514 00:15:58.950364    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0514 00:15:58.950364    4316 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0514 00:15:58.950364    4316 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0514 00:15:58.950364    4316 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0514 00:15:58.951662    4316 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-101100 san=[127.0.0.1 172.23.102.122 localhost minikube multinode-101100]
	I0514 00:15:59.389335    4316 provision.go:177] copyRemoteCerts
	I0514 00:15:59.398611    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0514 00:15:59.398740    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:16:01.402063    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:16:01.402063    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:01.403107    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:16:03.739112    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:16:03.739112    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:03.739112    4316 sshutil.go:53] new ssh client: &{IP:172.23.102.122 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\id_rsa Username:docker}
	I0514 00:16:03.845665    4316 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4467383s)
	I0514 00:16:03.845735    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0514 00:16:03.845857    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0514 00:16:03.899538    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0514 00:16:03.899960    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0514 00:16:03.950478    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0514 00:16:03.950478    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0514 00:16:03.991804    4316 provision.go:87] duration metric: took 13.5364113s to configureAuth
	I0514 00:16:03.991894    4316 buildroot.go:189] setting minikube options for container-runtime
	I0514 00:16:03.992600    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:16:03.992696    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:16:05.864478    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:16:05.864478    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:05.864478    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:16:08.115704    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:16:08.115704    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:08.118812    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:16:08.119401    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.122 22 <nil> <nil>}
	I0514 00:16:08.119401    4316 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0514 00:16:08.248745    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0514 00:16:08.248818    4316 buildroot.go:70] root file system type: tmpfs
	I0514 00:16:08.248916    4316 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0514 00:16:08.248916    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:16:10.126009    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:16:10.126009    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:10.126666    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:16:12.366162    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:16:12.366162    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:12.370602    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:16:12.371197    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.122 22 <nil> <nil>}
	I0514 00:16:12.371197    4316 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0514 00:16:12.518398    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0514 00:16:12.518469    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:16:14.346708    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:16:14.346708    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:14.346708    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:16:16.561242    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:16:16.561352    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:16.566359    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:16:16.566886    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.122 22 <nil> <nil>}
	I0514 00:16:16.567001    4316 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0514 00:16:18.958992    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0514 00:16:18.958992    4316 machine.go:97] duration metric: took 41.5275329s to provisionDockerMachine
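	The unit update just above follows a write-to-`.new`, diff, move, reload pattern, which is why the log shows `diff: can't stat ... No such file or directory` on a fresh VM: a missing live file makes `diff` fail, so the `.new` file is promoted. A sketch of that idempotent pattern against scratch files (local paths, no systemd involved):

	```shell
	# Candidate unit goes to *.new; the live file is replaced only on difference.
	unit=./docker.service
	printf '[Unit]\nDescription=demo\n' > "$unit.new"
	diff -u "$unit" "$unit.new" 2>/dev/null || {
	  mv "$unit.new" "$unit"
	  echo "unit updated"   # the real log follows this with daemon-reload + restart
	}
	# A second pass with identical content leaves the live file alone.
	printf '[Unit]\nDescription=demo\n' > "$unit.new"
	if diff -u "$unit" "$unit.new" >/dev/null 2>&1; then
	  rm "$unit.new"
	  echo "unit unchanged"
	fi
	```

	Running the restart branch only when the content changed is what keeps repeated provisioning passes from needlessly bouncing dockerd.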
	I0514 00:16:18.959976    4316 start.go:293] postStartSetup for "multinode-101100" (driver="hyperv")
	I0514 00:16:18.959976    4316 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0514 00:16:18.968760    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0514 00:16:18.968760    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:16:20.830444    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:16:20.830444    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:20.830963    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:16:23.021443    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:16:23.021443    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:23.022004    4316 sshutil.go:53] new ssh client: &{IP:172.23.102.122 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\id_rsa Username:docker}
	I0514 00:16:23.127972    4316 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.1589562s)
	I0514 00:16:23.135911    4316 ssh_runner.go:195] Run: cat /etc/os-release
	I0514 00:16:23.142708    4316 command_runner.go:130] > NAME=Buildroot
	I0514 00:16:23.142770    4316 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0514 00:16:23.142840    4316 command_runner.go:130] > ID=buildroot
	I0514 00:16:23.142840    4316 command_runner.go:130] > VERSION_ID=2023.02.9
	I0514 00:16:23.142894    4316 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0514 00:16:23.142975    4316 info.go:137] Remote host: Buildroot 2023.02.9
	I0514 00:16:23.142975    4316 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0514 00:16:23.142975    4316 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0514 00:16:23.144321    4316 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> 59842.pem in /etc/ssl/certs
	I0514 00:16:23.144321    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /etc/ssl/certs/59842.pem
	I0514 00:16:23.152311    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0514 00:16:23.167204    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /etc/ssl/certs/59842.pem (1708 bytes)
	I0514 00:16:23.208551    4316 start.go:296] duration metric: took 4.2483151s for postStartSetup
	I0514 00:16:23.208609    4316 fix.go:56] duration metric: took 1m20.6883818s for fixHost
	I0514 00:16:23.208676    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:16:25.059477    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:16:25.059477    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:25.059477    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:16:27.251865    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:16:27.251865    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:27.255851    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:16:27.255933    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.122 22 <nil> <nil>}
	I0514 00:16:27.255933    4316 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0514 00:16:27.393753    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715645787.622992710
	
	I0514 00:16:27.393753    4316 fix.go:216] guest clock: 1715645787.622992710
	I0514 00:16:27.393859    4316 fix.go:229] Guest: 2024-05-14 00:16:27.62299271 +0000 UTC Remote: 2024-05-14 00:16:23.2086094 +0000 UTC m=+87.138302401 (delta=4.41438331s)
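	The `fix.go` lines above read the guest clock over SSH, compare it with the host-side timestamp, and log the delta before correcting the guest with `date -s`. A sketch of that drift check with hard-coded epochs taken from the log (the 2-second threshold is hypothetical; minikube's actual tolerance may differ):

	```shell
	# Guest epoch as reported over SSH in the log, and a host reading ~4s behind
	# it, matching the delta=4.41s the log prints.
	guest=1715645787
	remote=$(( guest - 4 ))
	delta=$(( guest - remote ))
	if [ "$delta" -gt 2 ] || [ "$delta" -lt -2 ]; then
	  echo "clock drift ${delta}s: would run 'sudo date -s @${guest}' on the guest"
	fi
	```

	Keeping guest and host clocks close matters here because TLS certificate validation and apiserver token checks both break on large skew.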
	I0514 00:16:27.394004    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:16:29.282211    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:16:29.282211    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:29.282298    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:16:31.521171    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:16:31.521171    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:31.524707    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:16:31.525326    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.122 22 <nil> <nil>}
	I0514 00:16:31.525326    4316 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715645787
	I0514 00:16:31.656871    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 14 00:16:27 UTC 2024
	
	I0514 00:16:31.656871    4316 fix.go:236] clock set: Tue May 14 00:16:27 UTC 2024
	 (err=<nil>)
	I0514 00:16:31.656871    4316 start.go:83] releasing machines lock for "multinode-101100", held for 1m29.136123s
	I0514 00:16:31.657876    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:16:33.514775    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:16:33.514775    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:33.515311    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:16:35.727156    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:16:35.727479    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:35.730496    4316 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0514 00:16:35.730708    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:16:35.737940    4316 ssh_runner.go:195] Run: cat /version.json
	I0514 00:16:35.737940    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:16:37.650826    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:16:37.651706    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:37.651766    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:16:37.653750    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:16:37.653750    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:37.653750    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:16:39.992402    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:16:39.992402    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:39.992716    4316 sshutil.go:53] new ssh client: &{IP:172.23.102.122 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\id_rsa Username:docker}
	I0514 00:16:40.013262    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:16:40.013262    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:40.013982    4316 sshutil.go:53] new ssh client: &{IP:172.23.102.122 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\id_rsa Username:docker}
	I0514 00:16:40.170923    4316 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0514 00:16:40.170923    4316 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.440079s)
	I0514 00:16:40.170923    4316 command_runner.go:130] > {"iso_version": "v1.33.1", "kicbase_version": "v0.0.43-1714992375-18804", "minikube_version": "v1.33.1", "commit": "d6e0d89dd5607476c1efbac5f05c928d4cd7ef53"}
	I0514 00:16:40.170923    4316 ssh_runner.go:235] Completed: cat /version.json: (4.432709s)
	I0514 00:16:40.181732    4316 ssh_runner.go:195] Run: systemctl --version
	I0514 00:16:40.190102    4316 command_runner.go:130] > systemd 252 (252)
	I0514 00:16:40.190102    4316 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0514 00:16:40.201494    4316 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0514 00:16:40.209136    4316 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0514 00:16:40.209862    4316 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0514 00:16:40.217883    4316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0514 00:16:40.244144    4316 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0514 00:16:40.244710    4316 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0514 00:16:40.244777    4316 start.go:494] detecting cgroup driver to use...
	I0514 00:16:40.244814    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 00:16:40.274963    4316 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0514 00:16:40.285057    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0514 00:16:40.315083    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0514 00:16:40.341864    4316 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0514 00:16:40.352949    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0514 00:16:40.378197    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 00:16:40.403394    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0514 00:16:40.434406    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 00:16:40.462651    4316 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0514 00:16:40.488861    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0514 00:16:40.517167    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0514 00:16:40.548685    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
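	The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place to force the cgroupfs driver. The `SystemdCgroup` edit can be reproduced with the log's own sed expression against a scratch copy (the TOML fragment below is a minimal stand-in, not the full containerd config; `sed -i` as written assumes GNU sed):

	```shell
	cfg=./config.toml
	cat > "$cfg" <<'EOF'
	[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true
	EOF
	# Same expression the log runs: flip SystemdCgroup while preserving indentation.
	sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
	grep SystemdCgroup "$cfg"
	```

	The capture group `( *)` is what keeps the leading whitespace intact, so the rewritten key stays correctly nested under its TOML table.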
	I0514 00:16:40.577045    4316 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0514 00:16:40.591943    4316 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0514 00:16:40.600861    4316 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0514 00:16:40.626460    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:16:40.820490    4316 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0514 00:16:40.852637    4316 start.go:494] detecting cgroup driver to use...
	I0514 00:16:40.863007    4316 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0514 00:16:40.883155    4316 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0514 00:16:40.883155    4316 command_runner.go:130] > [Unit]
	I0514 00:16:40.883155    4316 command_runner.go:130] > Description=Docker Application Container Engine
	I0514 00:16:40.883155    4316 command_runner.go:130] > Documentation=https://docs.docker.com
	I0514 00:16:40.883155    4316 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0514 00:16:40.883155    4316 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0514 00:16:40.883155    4316 command_runner.go:130] > StartLimitBurst=3
	I0514 00:16:40.883155    4316 command_runner.go:130] > StartLimitIntervalSec=60
	I0514 00:16:40.883155    4316 command_runner.go:130] > [Service]
	I0514 00:16:40.883155    4316 command_runner.go:130] > Type=notify
	I0514 00:16:40.883155    4316 command_runner.go:130] > Restart=on-failure
	I0514 00:16:40.883155    4316 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0514 00:16:40.883597    4316 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0514 00:16:40.883597    4316 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0514 00:16:40.883597    4316 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0514 00:16:40.883597    4316 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0514 00:16:40.883597    4316 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0514 00:16:40.883695    4316 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0514 00:16:40.883695    4316 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0514 00:16:40.883695    4316 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0514 00:16:40.883695    4316 command_runner.go:130] > ExecStart=
	I0514 00:16:40.883775    4316 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0514 00:16:40.883775    4316 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0514 00:16:40.883775    4316 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0514 00:16:40.883775    4316 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0514 00:16:40.883861    4316 command_runner.go:130] > LimitNOFILE=infinity
	I0514 00:16:40.883861    4316 command_runner.go:130] > LimitNPROC=infinity
	I0514 00:16:40.883861    4316 command_runner.go:130] > LimitCORE=infinity
	I0514 00:16:40.883861    4316 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0514 00:16:40.883861    4316 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0514 00:16:40.883928    4316 command_runner.go:130] > TasksMax=infinity
	I0514 00:16:40.883928    4316 command_runner.go:130] > TimeoutStartSec=0
	I0514 00:16:40.883928    4316 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0514 00:16:40.883928    4316 command_runner.go:130] > Delegate=yes
	I0514 00:16:40.883928    4316 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0514 00:16:40.883992    4316 command_runner.go:130] > KillMode=process
	I0514 00:16:40.883992    4316 command_runner.go:130] > [Install]
	I0514 00:16:40.883992    4316 command_runner.go:130] > WantedBy=multi-user.target
	I0514 00:16:40.893446    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 00:16:40.921952    4316 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0514 00:16:40.955515    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 00:16:40.983495    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0514 00:16:41.012286    4316 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0514 00:16:41.067488    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0514 00:16:41.087023    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 00:16:41.116335    4316 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0514 00:16:41.127189    4316 ssh_runner.go:195] Run: which cri-dockerd
	I0514 00:16:41.133000    4316 command_runner.go:130] > /usr/bin/cri-dockerd
	I0514 00:16:41.141763    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0514 00:16:41.157407    4316 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0514 00:16:41.199050    4316 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0514 00:16:41.372093    4316 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0514 00:16:41.524964    4316 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0514 00:16:41.525288    4316 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0514 00:16:41.562963    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:16:41.735982    4316 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0514 00:16:44.313444    4316 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5773018s)
	I0514 00:16:44.322479    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0514 00:16:44.357441    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 00:16:44.389854    4316 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0514 00:16:44.571917    4316 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0514 00:16:44.733604    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:16:44.907417    4316 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0514 00:16:44.941956    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 00:16:44.971809    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:16:45.153688    4316 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0514 00:16:45.270309    4316 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0514 00:16:45.279530    4316 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0514 00:16:45.292735    4316 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0514 00:16:45.292735    4316 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0514 00:16:45.292735    4316 command_runner.go:130] > Device: 0,22	Inode: 856         Links: 1
	I0514 00:16:45.292735    4316 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0514 00:16:45.292735    4316 command_runner.go:130] > Access: 2024-05-14 00:16:45.408202295 +0000
	I0514 00:16:45.292735    4316 command_runner.go:130] > Modify: 2024-05-14 00:16:45.408202295 +0000
	I0514 00:16:45.292735    4316 command_runner.go:130] > Change: 2024-05-14 00:16:45.412202572 +0000
	I0514 00:16:45.292735    4316 command_runner.go:130] >  Birth: -
	I0514 00:16:45.292735    4316 start.go:562] Will wait 60s for crictl version
	I0514 00:16:45.302798    4316 ssh_runner.go:195] Run: which crictl
	I0514 00:16:45.309565    4316 command_runner.go:130] > /usr/bin/crictl
	I0514 00:16:45.318466    4316 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0514 00:16:45.363979    4316 command_runner.go:130] > Version:  0.1.0
	I0514 00:16:45.364568    4316 command_runner.go:130] > RuntimeName:  docker
	I0514 00:16:45.364568    4316 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0514 00:16:45.364568    4316 command_runner.go:130] > RuntimeApiVersion:  v1
	I0514 00:16:45.365985    4316 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0514 00:16:45.373806    4316 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 00:16:45.398333    4316 command_runner.go:130] > 26.0.2
	I0514 00:16:45.406271    4316 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 00:16:45.434253    4316 command_runner.go:130] > 26.0.2
	I0514 00:16:45.439147    4316 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0514 00:16:45.439323    4316 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0514 00:16:45.443156    4316 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0514 00:16:45.443156    4316 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0514 00:16:45.443211    4316 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0514 00:16:45.443211    4316 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:95:ed Flags:up|broadcast|multicast|running}
	I0514 00:16:45.445096    4316 ip.go:210] interface addr: fe80::3ceb:68d:afab:af25/64
	I0514 00:16:45.445096    4316 ip.go:210] interface addr: 172.23.96.1/20
	I0514 00:16:45.452094    4316 ssh_runner.go:195] Run: grep 172.23.96.1	host.minikube.internal$ /etc/hosts
	I0514 00:16:45.458825    4316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0514 00:16:45.478357    4316 kubeadm.go:877] updating cluster {Name:multinode-101100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-101100 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.102.122 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.109.58 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.102.231 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-
provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0514 00:16:45.478606    4316 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0514 00:16:45.485091    4316 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0514 00:16:45.506395    4316 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0514 00:16:45.506395    4316 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0514 00:16:45.506395    4316 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0514 00:16:45.506395    4316 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0514 00:16:45.506395    4316 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0514 00:16:45.506395    4316 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0514 00:16:45.506395    4316 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0514 00:16:45.506395    4316 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0514 00:16:45.506395    4316 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0514 00:16:45.506395    4316 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0514 00:16:45.506395    4316 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0514 00:16:45.506395    4316 docker.go:615] Images already preloaded, skipping extraction
	I0514 00:16:45.514627    4316 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0514 00:16:45.535349    4316 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0514 00:16:45.535349    4316 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0514 00:16:45.535349    4316 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0514 00:16:45.535349    4316 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0514 00:16:45.535349    4316 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0514 00:16:45.535349    4316 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0514 00:16:45.535349    4316 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0514 00:16:45.535349    4316 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0514 00:16:45.535799    4316 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0514 00:16:45.535799    4316 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0514 00:16:45.536313    4316 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0514 00:16:45.536398    4316 cache_images.go:84] Images are preloaded, skipping loading
	I0514 00:16:45.536398    4316 kubeadm.go:928] updating node { 172.23.102.122 8443 v1.30.0 docker true true} ...
	I0514 00:16:45.536570    4316 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-101100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.102.122
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-101100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0514 00:16:45.543082    4316 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0514 00:16:45.571915    4316 command_runner.go:130] > cgroupfs
	I0514 00:16:45.572196    4316 cni.go:84] Creating CNI manager for ""
	I0514 00:16:45.572196    4316 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0514 00:16:45.572264    4316 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0514 00:16:45.572343    4316 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.102.122 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-101100 NodeName:multinode-101100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.102.122"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.102.122 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0514 00:16:45.572629    4316 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.102.122
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-101100"
	  kubeletExtraArgs:
	    node-ip: 172.23.102.122
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.102.122"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0514 00:16:45.584627    4316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0514 00:16:45.603423    4316 command_runner.go:130] > kubeadm
	I0514 00:16:45.603457    4316 command_runner.go:130] > kubectl
	I0514 00:16:45.603457    4316 command_runner.go:130] > kubelet
	I0514 00:16:45.603511    4316 binaries.go:44] Found k8s binaries, skipping transfer
	I0514 00:16:45.613121    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0514 00:16:45.629761    4316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0514 00:16:45.668552    4316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0514 00:16:45.696749    4316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0514 00:16:45.737685    4316 ssh_runner.go:195] Run: grep 172.23.102.122	control-plane.minikube.internal$ /etc/hosts
	I0514 00:16:45.744447    4316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.102.122	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0514 00:16:45.770880    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:16:45.928609    4316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0514 00:16:45.953422    4316 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100 for IP: 172.23.102.122
	I0514 00:16:45.953422    4316 certs.go:194] generating shared ca certs ...
	I0514 00:16:45.953422    4316 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 00:16:45.954202    4316 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0514 00:16:45.954389    4316 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0514 00:16:45.954389    4316 certs.go:256] generating profile certs ...
	I0514 00:16:45.955082    4316 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\client.key
	I0514 00:16:45.955155    4316 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.key.d596c974
	I0514 00:16:45.955155    4316 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.crt.d596c974 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.102.122]
	I0514 00:16:46.073965    4316 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.crt.d596c974 ...
	I0514 00:16:46.073965    4316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.crt.d596c974: {Name:mk0abe85a6f763d7b15aec7cf028af93a3b41188 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 00:16:46.075203    4316 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.key.d596c974 ...
	I0514 00:16:46.075203    4316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.key.d596c974: {Name:mkc641951683ee38c2ef89b0e9f4e36ad27cbf87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 00:16:46.075830    4316 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.crt.d596c974 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.crt
	I0514 00:16:46.086730    4316 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.key.d596c974 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.key
	I0514 00:16:46.088198    4316 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\proxy-client.key
	I0514 00:16:46.088198    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0514 00:16:46.088590    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0514 00:16:46.088590    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0514 00:16:46.088590    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0514 00:16:46.088590    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0514 00:16:46.088590    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0514 00:16:46.089189    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0514 00:16:46.089189    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0514 00:16:46.089783    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem (1338 bytes)
	W0514 00:16:46.089783    4316 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984_empty.pem, impossibly tiny 0 bytes
	I0514 00:16:46.089783    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0514 00:16:46.089783    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0514 00:16:46.090380    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0514 00:16:46.090380    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0514 00:16:46.090949    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem (1708 bytes)
	I0514 00:16:46.091048    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /usr/share/ca-certificates/59842.pem
	I0514 00:16:46.091048    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:16:46.091048    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem -> /usr/share/ca-certificates/5984.pem
	I0514 00:16:46.092203    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0514 00:16:46.136905    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0514 00:16:46.185458    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0514 00:16:46.233388    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0514 00:16:46.277608    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0514 00:16:46.320142    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0514 00:16:46.362716    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0514 00:16:46.405730    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0514 00:16:46.447453    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /usr/share/ca-certificates/59842.pem (1708 bytes)
	I0514 00:16:46.488234    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0514 00:16:46.530905    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem --> /usr/share/ca-certificates/5984.pem (1338 bytes)
	I0514 00:16:46.578512    4316 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0514 00:16:46.623582    4316 ssh_runner.go:195] Run: openssl version
	I0514 00:16:46.631851    4316 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0514 00:16:46.641440    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59842.pem && ln -fs /usr/share/ca-certificates/59842.pem /etc/ssl/certs/59842.pem"
	I0514 00:16:46.666121    4316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59842.pem
	I0514 00:16:46.672639    4316 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0514 00:16:46.673480    4316 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0514 00:16:46.681837    4316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59842.pem
	I0514 00:16:46.689880    4316 command_runner.go:130] > 3ec20f2e
	I0514 00:16:46.699676    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59842.pem /etc/ssl/certs/3ec20f2e.0"
	I0514 00:16:46.728150    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0514 00:16:46.754886    4316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:16:46.761345    4316 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:16:46.761345    4316 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:16:46.770119    4316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:16:46.781912    4316 command_runner.go:130] > b5213941
	I0514 00:16:46.790612    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0514 00:16:46.817917    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5984.pem && ln -fs /usr/share/ca-certificates/5984.pem /etc/ssl/certs/5984.pem"
	I0514 00:16:46.846604    4316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5984.pem
	I0514 00:16:46.854720    4316 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0514 00:16:46.854720    4316 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0514 00:16:46.866338    4316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5984.pem
	I0514 00:16:46.874929    4316 command_runner.go:130] > 51391683
	I0514 00:16:46.885080    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5984.pem /etc/ssl/certs/51391683.0"
	I0514 00:16:46.914689    4316 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0514 00:16:46.922686    4316 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0514 00:16:46.922686    4316 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0514 00:16:46.922686    4316 command_runner.go:130] > Device: 8,1	Inode: 4196178     Links: 1
	I0514 00:16:46.922875    4316 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0514 00:16:46.922928    4316 command_runner.go:130] > Access: 2024-05-13 23:55:59.004892352 +0000
	I0514 00:16:46.922928    4316 command_runner.go:130] > Modify: 2024-05-13 23:55:59.004892352 +0000
	I0514 00:16:46.922928    4316 command_runner.go:130] > Change: 2024-05-13 23:55:59.004892352 +0000
	I0514 00:16:46.922928    4316 command_runner.go:130] >  Birth: 2024-05-13 23:55:59.004892352 +0000
	I0514 00:16:46.932037    4316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0514 00:16:46.940908    4316 command_runner.go:130] > Certificate will not expire
	I0514 00:16:46.949788    4316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0514 00:16:46.958930    4316 command_runner.go:130] > Certificate will not expire
	I0514 00:16:46.968372    4316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0514 00:16:46.977315    4316 command_runner.go:130] > Certificate will not expire
	I0514 00:16:46.985536    4316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0514 00:16:46.995597    4316 command_runner.go:130] > Certificate will not expire
	I0514 00:16:47.002968    4316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0514 00:16:47.011730    4316 command_runner.go:130] > Certificate will not expire
	I0514 00:16:47.019252    4316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0514 00:16:47.027599    4316 command_runner.go:130] > Certificate will not expire
	I0514 00:16:47.029084    4316 kubeadm.go:391] StartCluster: {Name:multinode-101100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-101100 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.102.122 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.109.58 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.102.231 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0514 00:16:47.036513    4316 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0514 00:16:47.065945    4316 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0514 00:16:47.082783    4316 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0514 00:16:47.082874    4316 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0514 00:16:47.082874    4316 command_runner.go:130] > /var/lib/minikube/etcd:
	I0514 00:16:47.082874    4316 command_runner.go:130] > member
	W0514 00:16:47.082994    4316 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0514 00:16:47.082994    4316 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0514 00:16:47.083053    4316 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0514 00:16:47.091039    4316 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0514 00:16:47.109091    4316 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0514 00:16:47.110220    4316 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-101100" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0514 00:16:47.110619    4316 kubeconfig.go:62] C:\Users\jenkins.minikube5\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-101100" cluster setting kubeconfig missing "multinode-101100" context setting]
	I0514 00:16:47.111367    4316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 00:16:47.123911    4316 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0514 00:16:47.124910    4316 kapi.go:59] client config for multinode-101100: &rest.Config{Host:"https://172.23.102.122:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100/client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100/client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0514 00:16:47.125257    4316 cert_rotation.go:137] Starting client certificate rotation controller
	I0514 00:16:47.134253    4316 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0514 00:16:47.150072    4316 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0514 00:16:47.150072    4316 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0514 00:16:47.151207    4316 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0514 00:16:47.151207    4316 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0514 00:16:47.151207    4316 command_runner.go:130] >  kind: InitConfiguration
	I0514 00:16:47.151207    4316 command_runner.go:130] >  localAPIEndpoint:
	I0514 00:16:47.151207    4316 command_runner.go:130] > -  advertiseAddress: 172.23.106.39
	I0514 00:16:47.151207    4316 command_runner.go:130] > +  advertiseAddress: 172.23.102.122
	I0514 00:16:47.151207    4316 command_runner.go:130] >    bindPort: 8443
	I0514 00:16:47.151259    4316 command_runner.go:130] >  bootstrapTokens:
	I0514 00:16:47.151259    4316 command_runner.go:130] >    - groups:
	I0514 00:16:47.151259    4316 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0514 00:16:47.151259    4316 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0514 00:16:47.151285    4316 command_runner.go:130] >    name: "multinode-101100"
	I0514 00:16:47.151285    4316 command_runner.go:130] >    kubeletExtraArgs:
	I0514 00:16:47.151285    4316 command_runner.go:130] > -    node-ip: 172.23.106.39
	I0514 00:16:47.151285    4316 command_runner.go:130] > +    node-ip: 172.23.102.122
	I0514 00:16:47.151285    4316 command_runner.go:130] >    taints: []
	I0514 00:16:47.151285    4316 command_runner.go:130] >  ---
	I0514 00:16:47.151285    4316 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0514 00:16:47.151347    4316 command_runner.go:130] >  kind: ClusterConfiguration
	I0514 00:16:47.151411    4316 command_runner.go:130] >  apiServer:
	I0514 00:16:47.151411    4316 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.23.106.39"]
	I0514 00:16:47.151411    4316 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.23.102.122"]
	I0514 00:16:47.151411    4316 command_runner.go:130] >    extraArgs:
	I0514 00:16:47.151411    4316 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0514 00:16:47.151411    4316 command_runner.go:130] >  controllerManager:
	I0514 00:16:47.151655    4316 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.23.106.39
	+  advertiseAddress: 172.23.102.122
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-101100"
	   kubeletExtraArgs:
	-    node-ip: 172.23.106.39
	+    node-ip: 172.23.102.122
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.23.106.39"]
	+  certSANs: ["127.0.0.1", "localhost", "172.23.102.122"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
	I0514 00:16:47.151740    4316 kubeadm.go:1154] stopping kube-system containers ...
	I0514 00:16:47.159132    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0514 00:16:47.182689    4316 command_runner.go:130] > 76c5ab7859ef
	I0514 00:16:47.182765    4316 command_runner.go:130] > e6ee22ee5c1b
	I0514 00:16:47.182765    4316 command_runner.go:130] > 8f7c140951f4
	I0514 00:16:47.182765    4316 command_runner.go:130] > 8bb49b28c842
	I0514 00:16:47.182805    4316 command_runner.go:130] > 9c4eb727cedb
	I0514 00:16:47.182805    4316 command_runner.go:130] > 91edaaa00da2
	I0514 00:16:47.182805    4316 command_runner.go:130] > 90d7537422a8
	I0514 00:16:47.182834    4316 command_runner.go:130] > 9bd694480978
	I0514 00:16:47.182834    4316 command_runner.go:130] > eda79d47d28f
	I0514 00:16:47.182834    4316 command_runner.go:130] > e96f94398d6d
	I0514 00:16:47.182874    4316 command_runner.go:130] > 964887fc5d36
	I0514 00:16:47.182874    4316 command_runner.go:130] > 06f1a683cad8
	I0514 00:16:47.182905    4316 command_runner.go:130] > da9268fd6556
	I0514 00:16:47.182905    4316 command_runner.go:130] > 287e744a4dc2
	I0514 00:16:47.182905    4316 command_runner.go:130] > ad0550a5dabf
	I0514 00:16:47.182905    4316 command_runner.go:130] > fcb3b27edcd2
	I0514 00:16:47.182974    4316 docker.go:483] Stopping containers: [76c5ab7859ef e6ee22ee5c1b 8f7c140951f4 8bb49b28c842 9c4eb727cedb 91edaaa00da2 90d7537422a8 9bd694480978 eda79d47d28f e96f94398d6d 964887fc5d36 06f1a683cad8 da9268fd6556 287e744a4dc2 ad0550a5dabf fcb3b27edcd2]
	I0514 00:16:47.190450    4316 ssh_runner.go:195] Run: docker stop 76c5ab7859ef e6ee22ee5c1b 8f7c140951f4 8bb49b28c842 9c4eb727cedb 91edaaa00da2 90d7537422a8 9bd694480978 eda79d47d28f e96f94398d6d 964887fc5d36 06f1a683cad8 da9268fd6556 287e744a4dc2 ad0550a5dabf fcb3b27edcd2
	I0514 00:16:47.209602    4316 command_runner.go:130] > 76c5ab7859ef
	I0514 00:16:47.209602    4316 command_runner.go:130] > e6ee22ee5c1b
	I0514 00:16:47.209602    4316 command_runner.go:130] > 8f7c140951f4
	I0514 00:16:47.214857    4316 command_runner.go:130] > 8bb49b28c842
	I0514 00:16:47.214857    4316 command_runner.go:130] > 9c4eb727cedb
	I0514 00:16:47.214857    4316 command_runner.go:130] > 91edaaa00da2
	I0514 00:16:47.214857    4316 command_runner.go:130] > 90d7537422a8
	I0514 00:16:47.214857    4316 command_runner.go:130] > 9bd694480978
	I0514 00:16:47.214857    4316 command_runner.go:130] > eda79d47d28f
	I0514 00:16:47.215291    4316 command_runner.go:130] > e96f94398d6d
	I0514 00:16:47.215357    4316 command_runner.go:130] > 964887fc5d36
	I0514 00:16:47.215357    4316 command_runner.go:130] > 06f1a683cad8
	I0514 00:16:47.215357    4316 command_runner.go:130] > da9268fd6556
	I0514 00:16:47.215970    4316 command_runner.go:130] > 287e744a4dc2
	I0514 00:16:47.216196    4316 command_runner.go:130] > ad0550a5dabf
	I0514 00:16:47.216196    4316 command_runner.go:130] > fcb3b27edcd2
	I0514 00:16:47.228375    4316 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0514 00:16:47.261413    4316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0514 00:16:47.276310    4316 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0514 00:16:47.276310    4316 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0514 00:16:47.276928    4316 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0514 00:16:47.277108    4316 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0514 00:16:47.277261    4316 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0514 00:16:47.277301    4316 kubeadm.go:156] found existing configuration files:
	
	I0514 00:16:47.289673    4316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0514 00:16:47.306806    4316 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0514 00:16:47.306806    4316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0514 00:16:47.317953    4316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0514 00:16:47.341565    4316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0514 00:16:47.357495    4316 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0514 00:16:47.357495    4316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0514 00:16:47.365755    4316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0514 00:16:47.391097    4316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0514 00:16:47.406813    4316 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0514 00:16:47.407574    4316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0514 00:16:47.417933    4316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0514 00:16:47.442107    4316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0514 00:16:47.462703    4316 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0514 00:16:47.463307    4316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0514 00:16:47.471330    4316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0514 00:16:47.496097    4316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0514 00:16:47.512818    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0514 00:16:47.719250    4316 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0514 00:16:47.719250    4316 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0514 00:16:47.719747    4316 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0514 00:16:47.720034    4316 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0514 00:16:47.721726    4316 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0514 00:16:47.721812    4316 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0514 00:16:47.723049    4316 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0514 00:16:47.723386    4316 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0514 00:16:47.723740    4316 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0514 00:16:47.723740    4316 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0514 00:16:47.724272    4316 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0514 00:16:47.727309    4316 command_runner.go:130] > [certs] Using the existing "sa" key
	I0514 00:16:47.729797    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0514 00:16:49.151260    4316 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0514 00:16:49.151750    4316 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0514 00:16:49.151750    4316 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0514 00:16:49.151750    4316 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0514 00:16:49.151750    4316 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0514 00:16:49.151855    4316 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0514 00:16:49.151855    4316 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.4219695s)
	I0514 00:16:49.151855    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0514 00:16:49.238314    4316 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0514 00:16:49.239346    4316 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0514 00:16:49.239346    4316 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0514 00:16:49.414673    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0514 00:16:49.515362    4316 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0514 00:16:49.515486    4316 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0514 00:16:49.515486    4316 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0514 00:16:49.515486    4316 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0514 00:16:49.515592    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0514 00:16:49.609805    4316 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0514 00:16:49.609955    4316 api_server.go:52] waiting for apiserver process to appear ...
	I0514 00:16:49.621169    4316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 00:16:50.127859    4316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 00:16:50.635206    4316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 00:16:51.124082    4316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 00:16:51.633189    4316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 00:16:51.656202    4316 command_runner.go:130] > 1838
	I0514 00:16:51.657036    4316 api_server.go:72] duration metric: took 2.0470115s to wait for apiserver process to appear ...
	I0514 00:16:51.657239    4316 api_server.go:88] waiting for apiserver healthz status ...
	I0514 00:16:51.657363    4316 api_server.go:253] Checking apiserver healthz at https://172.23.102.122:8443/healthz ...
	I0514 00:16:54.585189    4316 api_server.go:279] https://172.23.102.122:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0514 00:16:54.585189    4316 api_server.go:103] status: https://172.23.102.122:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0514 00:16:54.585189    4316 api_server.go:253] Checking apiserver healthz at https://172.23.102.122:8443/healthz ...
	I0514 00:16:54.624538    4316 api_server.go:279] https://172.23.102.122:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0514 00:16:54.624538    4316 api_server.go:103] status: https://172.23.102.122:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0514 00:16:54.665959    4316 api_server.go:253] Checking apiserver healthz at https://172.23.102.122:8443/healthz ...
	I0514 00:16:54.707569    4316 api_server.go:279] https://172.23.102.122:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0514 00:16:54.707646    4316 api_server.go:103] status: https://172.23.102.122:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0514 00:16:55.172587    4316 api_server.go:253] Checking apiserver healthz at https://172.23.102.122:8443/healthz ...
	I0514 00:16:55.182411    4316 api_server.go:279] https://172.23.102.122:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0514 00:16:55.182507    4316 api_server.go:103] status: https://172.23.102.122:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0514 00:16:55.659989    4316 api_server.go:253] Checking apiserver healthz at https://172.23.102.122:8443/healthz ...
	I0514 00:16:55.673996    4316 api_server.go:279] https://172.23.102.122:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0514 00:16:55.673996    4316 api_server.go:103] status: https://172.23.102.122:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0514 00:16:56.166856    4316 api_server.go:253] Checking apiserver healthz at https://172.23.102.122:8443/healthz ...
	I0514 00:16:56.183940    4316 api_server.go:279] https://172.23.102.122:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0514 00:16:56.183940    4316 api_server.go:103] status: https://172.23.102.122:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0514 00:16:56.658537    4316 api_server.go:253] Checking apiserver healthz at https://172.23.102.122:8443/healthz ...
	I0514 00:16:56.671344    4316 api_server.go:279] https://172.23.102.122:8443/healthz returned 200:
	ok
	I0514 00:16:56.671578    4316 round_trippers.go:463] GET https://172.23.102.122:8443/version
	I0514 00:16:56.671578    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:56.671578    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:56.671578    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:56.705098    4316 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0514 00:16:56.705098    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:56.705098    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:56 GMT
	I0514 00:16:56.705098    4316 round_trippers.go:580]     Audit-Id: c7c20ff9-70cd-4060-84d7-ec8bf3825c2a
	I0514 00:16:56.705098    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:56.705098    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:56.705098    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:56.705098    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:56.705098    4316 round_trippers.go:580]     Content-Length: 263
	I0514 00:16:56.705911    4316 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0514 00:16:56.706007    4316 api_server.go:141] control plane version: v1.30.0
	I0514 00:16:56.706007    4316 api_server.go:131] duration metric: took 5.048412s to wait for apiserver health ...
	I0514 00:16:56.706007    4316 cni.go:84] Creating CNI manager for ""
	I0514 00:16:56.706007    4316 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0514 00:16:56.708331    4316 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0514 00:16:56.718220    4316 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0514 00:16:56.724796    4316 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0514 00:16:56.725261    4316 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0514 00:16:56.725261    4316 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0514 00:16:56.725261    4316 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0514 00:16:56.725261    4316 command_runner.go:130] > Access: 2024-05-14 00:15:32.198040600 +0000
	I0514 00:16:56.725345    4316 command_runner.go:130] > Modify: 2024-05-09 03:04:38.000000000 +0000
	I0514 00:16:56.725345    4316 command_runner.go:130] > Change: 2024-05-14 00:15:21.020000000 +0000
	I0514 00:16:56.725345    4316 command_runner.go:130] >  Birth: -
	I0514 00:16:56.725491    4316 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0514 00:16:56.725491    4316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0514 00:16:56.783701    4316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0514 00:16:57.633064    4316 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0514 00:16:57.633064    4316 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0514 00:16:57.633382    4316 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0514 00:16:57.633382    4316 command_runner.go:130] > daemonset.apps/kindnet configured
	I0514 00:16:57.633464    4316 system_pods.go:43] waiting for kube-system pods to appear ...
	I0514 00:16:57.633662    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods
	I0514 00:16:57.633662    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:57.633662    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:57.633662    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:57.639096    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:16:57.640095    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:57.640095    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:57.640095    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:57.640095    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:57.640095    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:57 GMT
	I0514 00:16:57.640095    4316 round_trippers.go:580]     Audit-Id: c0dd21b6-0c47-4067-b310-9b08bd0f7eec
	I0514 00:16:57.640180    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:57.641605    4316 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1736"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87830 chars]
	I0514 00:16:57.647515    4316 system_pods.go:59] 12 kube-system pods found
	I0514 00:16:57.648051    4316 system_pods.go:61] "coredns-7db6d8ff4d-4kmx4" [06858a47-f51b-48d8-a2a6-f60b8107be13] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0514 00:16:57.648051    4316 system_pods.go:61] "etcd-multinode-101100" [74cd34fe-a56b-453d-afb3-a9db3db0d5ba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0514 00:16:57.648051    4316 system_pods.go:61] "kindnet-2lwsm" [26b8beff-9849-4cbf-9a2b-8ef6354fa5ca] Running
	I0514 00:16:57.648051    4316 system_pods.go:61] "kindnet-9q2tv" [5b3ee167-f21f-46b3-bace-03a7233717e0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0514 00:16:57.648051    4316 system_pods.go:61] "kindnet-tfbt8" [95a6d195-9e10-4569-902b-b56e495c9b86] Running
	I0514 00:16:57.648051    4316 system_pods.go:61] "kube-apiserver-multinode-101100" [60889645-4c2d-4cfc-b322-c0f1b6e34503] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0514 00:16:57.648051    4316 system_pods.go:61] "kube-controller-manager-multinode-101100" [1a74381a-7477-4fd3-b344-c4a230014f97] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0514 00:16:57.648152    4316 system_pods.go:61] "kube-proxy-8zsgn" [af208cbd-fa8a-4822-9b19-dc30f63fa59c] Running
	I0514 00:16:57.648152    4316 system_pods.go:61] "kube-proxy-b25hq" [d39f5818-3e88-4162-a7ce-734ca28103bf] Running
	I0514 00:16:57.648152    4316 system_pods.go:61] "kube-proxy-zhcz6" [a9a488af-41ba-47f3-87b0-5a2f062afad6] Running
	I0514 00:16:57.648152    4316 system_pods.go:61] "kube-scheduler-multinode-101100" [d7300c2d-377f-4061-bd34-5f7593b7e827] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0514 00:16:57.648152    4316 system_pods.go:61] "storage-provisioner" [a92f04b8-a93f-42d8-81d7-d4da6bf2e247] Running
	I0514 00:16:57.648197    4316 system_pods.go:74] duration metric: took 14.6876ms to wait for pod list to return data ...
	I0514 00:16:57.648197    4316 node_conditions.go:102] verifying NodePressure condition ...
	I0514 00:16:57.648239    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes
	I0514 00:16:57.648239    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:57.648239    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:57.648239    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:57.652816    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:16:57.653275    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:57.653275    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:57.653275    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:57.653275    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:57.653275    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:57 GMT
	I0514 00:16:57.653275    4316 round_trippers.go:580]     Audit-Id: 52f8cd9b-9478-4a5b-b2a9-7058f635ac93
	I0514 00:16:57.653275    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:57.653275    4316 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1736"},"items":[{"metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16289 chars]
	I0514 00:16:57.654007    4316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 00:16:57.654007    4316 node_conditions.go:123] node cpu capacity is 2
	I0514 00:16:57.654007    4316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 00:16:57.654007    4316 node_conditions.go:123] node cpu capacity is 2
	I0514 00:16:57.654007    4316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 00:16:57.654007    4316 node_conditions.go:123] node cpu capacity is 2
	I0514 00:16:57.654007    4316 node_conditions.go:105] duration metric: took 5.8098ms to run NodePressure ...
	I0514 00:16:57.654007    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0514 00:16:57.891879    4316 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0514 00:16:57.985373    4316 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0514 00:16:57.989862    4316 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0514 00:16:57.990024    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0514 00:16:57.990024    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:57.990024    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:57.990077    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:57.996623    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:16:57.996623    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:57.996623    4316 round_trippers.go:580]     Audit-Id: c7babe1e-ef01-4342-82cc-e0291869b4ea
	I0514 00:16:57.996623    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:57.996623    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:57.996623    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:57.996623    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:57.996623    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:57.997762    4316 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1740"},"items":[{"metadata":{"name":"etcd-multinode-101100","namespace":"kube-system","uid":"74cd34fe-a56b-453d-afb3-a9db3db0d5ba","resourceVersion":"1710","creationTimestamp":"2024-05-14T00:16:55Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.102.122:2379","kubernetes.io/config.hash":"62d8afc7714e8ab65bff9675d120bb67","kubernetes.io/config.mirror":"62d8afc7714e8ab65bff9675d120bb67","kubernetes.io/config.seen":"2024-05-14T00:16:49.843121737Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:16:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 30563 chars]
	I0514 00:16:57.999809    4316 kubeadm.go:733] kubelet initialised
	I0514 00:16:57.999912    4316 kubeadm.go:734] duration metric: took 10.05ms waiting for restarted kubelet to initialise ...
	I0514 00:16:57.999912    4316 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 00:16:58.000170    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods
	I0514 00:16:58.000170    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.000170    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.000170    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.004319    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:16:58.004319    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.004319    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.004319    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.004319    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:58.004319    4316 round_trippers.go:580]     Audit-Id: d35a7077-59c8-46af-8259-69aafd6d932f
	I0514 00:16:58.004319    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.004319    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.005490    4316 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1740"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87830 chars]
	I0514 00:16:58.009394    4316 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace to be "Ready" ...
	I0514 00:16:58.009512    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:16:58.009512    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.009512    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.009512    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.011831    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:16:58.011831    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.011831    4316 round_trippers.go:580]     Audit-Id: 7cbc2aea-a828-4341-b384-2cb1cc2ef98e
	I0514 00:16:58.011831    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.011831    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.011831    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.012786    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.012786    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:58.012855    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:16:58.013479    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:16:58.013542    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.013542    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.013542    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.015734    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:16:58.015734    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.015734    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:58.015734    4316 round_trippers.go:580]     Audit-Id: 0550ae30-001d-4590-99e5-444c9cac4998
	I0514 00:16:58.015734    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.015734    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.015734    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.015734    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.015734    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:16:58.015734    4316 pod_ready.go:97] node "multinode-101100" hosting pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100" has status "Ready":"False"
	I0514 00:16:58.015734    4316 pod_ready.go:81] duration metric: took 6.3395ms for pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace to be "Ready" ...
	E0514 00:16:58.015734    4316 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-101100" hosting pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100" has status "Ready":"False"
	I0514 00:16:58.015734    4316 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:16:58.015734    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-101100
	I0514 00:16:58.015734    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.016742    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.016742    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.018829    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:16:58.018829    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.018829    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.018829    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.018829    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.018829    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.018829    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:58.018829    4316 round_trippers.go:580]     Audit-Id: 82e3e21c-c444-40fb-90c7-62e3d45c1350
	I0514 00:16:58.019732    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-101100","namespace":"kube-system","uid":"74cd34fe-a56b-453d-afb3-a9db3db0d5ba","resourceVersion":"1710","creationTimestamp":"2024-05-14T00:16:55Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.102.122:2379","kubernetes.io/config.hash":"62d8afc7714e8ab65bff9675d120bb67","kubernetes.io/config.mirror":"62d8afc7714e8ab65bff9675d120bb67","kubernetes.io/config.seen":"2024-05-14T00:16:49.843121737Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:16:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6395 chars]
	I0514 00:16:58.020200    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:16:58.020200    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.020200    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.020200    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.022398    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:16:58.022398    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.022398    4316 round_trippers.go:580]     Audit-Id: 1645c3a5-0c58-4f60-9aad-35356d67c1b2
	I0514 00:16:58.022398    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.022398    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.022398    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.022398    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.022398    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:58.022724    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:16:58.022724    4316 pod_ready.go:97] node "multinode-101100" hosting pod "etcd-multinode-101100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100" has status "Ready":"False"
	I0514 00:16:58.022724    4316 pod_ready.go:81] duration metric: took 6.9898ms for pod "etcd-multinode-101100" in "kube-system" namespace to be "Ready" ...
	E0514 00:16:58.022724    4316 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-101100" hosting pod "etcd-multinode-101100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100" has status "Ready":"False"
	I0514 00:16:58.022724    4316 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:16:58.023276    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-101100
	I0514 00:16:58.023276    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.023276    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.023276    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.029528    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:16:58.029528    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.029528    4316 round_trippers.go:580]     Audit-Id: 537c2268-6ff9-44f9-9117-ddc11414a511
	I0514 00:16:58.029528    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.029528    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.029528    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.029528    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.029528    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:58.029528    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-101100","namespace":"kube-system","uid":"60889645-4c2d-4cfc-b322-c0f1b6e34503","resourceVersion":"1709","creationTimestamp":"2024-05-14T00:16:55Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.102.122:8443","kubernetes.io/config.hash":"378d61cf78af695f1df41e321907a84d","kubernetes.io/config.mirror":"378d61cf78af695f1df41e321907a84d","kubernetes.io/config.seen":"2024-05-14T00:16:49.778409853Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:16:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7949 chars]
	I0514 00:16:58.030474    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:16:58.030474    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.030474    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.030474    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.033230    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:16:58.033230    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.033230    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.033230    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.033230    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.033230    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.033230    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:58.033230    4316 round_trippers.go:580]     Audit-Id: e5f41acb-e690-4a95-8a06-aff24eb7d538
	I0514 00:16:58.033230    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:16:58.033230    4316 pod_ready.go:97] node "multinode-101100" hosting pod "kube-apiserver-multinode-101100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100" has status "Ready":"False"
	I0514 00:16:58.033230    4316 pod_ready.go:81] duration metric: took 10.5055ms for pod "kube-apiserver-multinode-101100" in "kube-system" namespace to be "Ready" ...
	E0514 00:16:58.033230    4316 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-101100" hosting pod "kube-apiserver-multinode-101100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100" has status "Ready":"False"
	I0514 00:16:58.033230    4316 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:16:58.033230    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-101100
	I0514 00:16:58.033230    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.033230    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.033230    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.037041    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:16:58.037041    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.037041    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.037041    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:58.037041    4316 round_trippers.go:580]     Audit-Id: 91361b84-5dad-467f-b832-80619abdfac3
	I0514 00:16:58.037041    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.037041    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.037041    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.037582    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-101100","namespace":"kube-system","uid":"1a74381a-7477-4fd3-b344-c4a230014f97","resourceVersion":"1704","creationTimestamp":"2024-05-13T23:56:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5393de2704b2efef461d22fa52aa93c8","kubernetes.io/config.mirror":"5393de2704b2efef461d22fa52aa93c8","kubernetes.io/config.seen":"2024-05-13T23:56:09.392106640Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0514 00:16:58.038066    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:16:58.038120    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.038120    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.038120    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.040357    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:16:58.040357    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.040357    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.040357    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.040357    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.040357    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:58.040357    4316 round_trippers.go:580]     Audit-Id: 37ad5494-c885-496c-b557-e7961e1bdbfb
	I0514 00:16:58.040357    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.040357    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:16:58.040357    4316 pod_ready.go:97] node "multinode-101100" hosting pod "kube-controller-manager-multinode-101100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100" has status "Ready":"False"
	I0514 00:16:58.040357    4316 pod_ready.go:81] duration metric: took 7.1259ms for pod "kube-controller-manager-multinode-101100" in "kube-system" namespace to be "Ready" ...
	E0514 00:16:58.040357    4316 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-101100" hosting pod "kube-controller-manager-multinode-101100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100" has status "Ready":"False"
	I0514 00:16:58.040357    4316 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8zsgn" in "kube-system" namespace to be "Ready" ...
	I0514 00:16:58.237011    4316 request.go:629] Waited for 196.6424ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8zsgn
	I0514 00:16:58.237323    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8zsgn
	I0514 00:16:58.237323    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.237323    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.237323    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.240917    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:16:58.241232    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.241232    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.241232    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.241232    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:58.241232    4316 round_trippers.go:580]     Audit-Id: 96720c27-9fb4-4bf9-8a0d-51a2002d1f62
	I0514 00:16:58.241232    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.241232    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.241232    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8zsgn","generateName":"kube-proxy-","namespace":"kube-system","uid":"af208cbd-fa8a-4822-9b19-dc30f63fa59c","resourceVersion":"1621","creationTimestamp":"2024-05-14T00:03:17Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:03:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0514 00:16:58.441599    4316 request.go:629] Waited for 199.1243ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:16:58.442008    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:16:58.442182    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.442253    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.442253    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.446036    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:16:58.446036    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.446036    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.446036    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.446036    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.446036    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.446036    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:58.446036    4316 round_trippers.go:580]     Audit-Id: d1de8feb-0016-4798-a45b-5a1efd685a68
	I0514 00:16:58.446534    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"fd2d4a0b-dc97-4959-b2ba-0f51719ad2b3","resourceVersion":"1631","creationTimestamp":"2024-05-14T00:12:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_12_45_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:12:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4400 chars]
	I0514 00:16:58.446636    4316 pod_ready.go:97] node "multinode-101100-m03" hosting pod "kube-proxy-8zsgn" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100-m03" has status "Ready":"Unknown"
	I0514 00:16:58.446636    4316 pod_ready.go:81] duration metric: took 406.2541ms for pod "kube-proxy-8zsgn" in "kube-system" namespace to be "Ready" ...
	E0514 00:16:58.446636    4316 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-101100-m03" hosting pod "kube-proxy-8zsgn" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100-m03" has status "Ready":"Unknown"
	I0514 00:16:58.446636    4316 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-b25hq" in "kube-system" namespace to be "Ready" ...
	I0514 00:16:58.641794    4316 request.go:629] Waited for 195.0509ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b25hq
	I0514 00:16:58.642006    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b25hq
	I0514 00:16:58.642006    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.642006    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.642123    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.645512    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:16:58.645889    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.645889    4316 round_trippers.go:580]     Audit-Id: bebc959d-6568-4027-8765-e2df5b294951
	I0514 00:16:58.645889    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.645889    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.645889    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.645889    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.645889    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:58.646455    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-b25hq","generateName":"kube-proxy-","namespace":"kube-system","uid":"d39f5818-3e88-4162-a7ce-734ca28103bf","resourceVersion":"1641","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0514 00:16:58.844737    4316 request.go:629] Waited for 197.231ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:16:58.845129    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:16:58.845129    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.845129    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.845129    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.848706    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:16:58.848706    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.848706    4316 round_trippers.go:580]     Audit-Id: fbd681a6-2f5a-4f26-9724-f358a491c712
	I0514 00:16:58.848706    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.848706    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.848706    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.848706    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.848706    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:59 GMT
	I0514 00:16:58.848706    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"1642","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4485 chars]
	I0514 00:16:58.849465    4316 pod_ready.go:97] node "multinode-101100-m02" hosting pod "kube-proxy-b25hq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100-m02" has status "Ready":"Unknown"
	I0514 00:16:58.849465    4316 pod_ready.go:81] duration metric: took 402.8036ms for pod "kube-proxy-b25hq" in "kube-system" namespace to be "Ready" ...
	E0514 00:16:58.849465    4316 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-101100-m02" hosting pod "kube-proxy-b25hq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100-m02" has status "Ready":"Unknown"
	I0514 00:16:58.849465    4316 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zhcz6" in "kube-system" namespace to be "Ready" ...
	I0514 00:16:59.049100    4316 request.go:629] Waited for 199.4984ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zhcz6
	I0514 00:16:59.049330    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zhcz6
	I0514 00:16:59.049330    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:59.049330    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:59.049330    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:59.055481    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:16:59.055481    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:59.055481    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:59.055481    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:59.055481    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:59.055481    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:59 GMT
	I0514 00:16:59.055481    4316 round_trippers.go:580]     Audit-Id: aec56393-54ad-44f8-b47f-d1e7de7abac4
	I0514 00:16:59.055481    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:59.056154    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zhcz6","generateName":"kube-proxy-","namespace":"kube-system","uid":"a9a488af-41ba-47f3-87b0-5a2f062afad6","resourceVersion":"1732","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0514 00:16:59.236203    4316 request.go:629] Waited for 179.3737ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:16:59.236581    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:16:59.236581    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:59.236581    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:59.236581    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:59.240944    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:16:59.240944    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:59.241033    4316 round_trippers.go:580]     Audit-Id: 55207bf2-b020-41a6-8c4b-727e05a5a996
	I0514 00:16:59.241033    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:59.241033    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:59.241033    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:59.241033    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:59.241033    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:59 GMT
	I0514 00:16:59.241327    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:16:59.242013    4316 pod_ready.go:97] node "multinode-101100" hosting pod "kube-proxy-zhcz6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100" has status "Ready":"False"
	I0514 00:16:59.242088    4316 pod_ready.go:81] duration metric: took 392.5992ms for pod "kube-proxy-zhcz6" in "kube-system" namespace to be "Ready" ...
	E0514 00:16:59.242088    4316 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-101100" hosting pod "kube-proxy-zhcz6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100" has status "Ready":"False"
	I0514 00:16:59.242088    4316 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:16:59.439671    4316 request.go:629] Waited for 197.1241ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-101100
	I0514 00:16:59.439671    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-101100
	I0514 00:16:59.439671    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:59.439671    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:59.439671    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:59.443238    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:16:59.443238    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:59.443238    4316 round_trippers.go:580]     Audit-Id: 4b4375a1-8177-41f3-8456-500a26c3533d
	I0514 00:16:59.443238    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:59.443930    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:59.443930    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:59.443930    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:59.443930    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:59 GMT
	I0514 00:16:59.444103    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-101100","namespace":"kube-system","uid":"d7300c2d-377f-4061-bd34-5f7593b7e827","resourceVersion":"1707","creationTimestamp":"2024-05-13T23:56:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8083abd658221f47cabf81a00c4ca98e","kubernetes.io/config.mirror":"8083abd658221f47cabf81a00c4ca98e","kubernetes.io/config.seen":"2024-05-13T23:56:09.392108241Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5449 chars]
	I0514 00:16:59.643851    4316 request.go:629] Waited for 199.065ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:16:59.643933    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:16:59.643933    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:59.643933    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:59.643933    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:59.647809    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:16:59.647809    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:59.647809    4316 round_trippers.go:580]     Audit-Id: 105eb453-6f2b-40c5-8fce-367c353c5334
	I0514 00:16:59.647809    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:59.647809    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:59.647809    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:59.647809    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:59.647809    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:59 GMT
	I0514 00:16:59.647809    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:16:59.649098    4316 pod_ready.go:97] node "multinode-101100" hosting pod "kube-scheduler-multinode-101100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100" has status "Ready":"False"
	I0514 00:16:59.649177    4316 pod_ready.go:81] duration metric: took 407.0633ms for pod "kube-scheduler-multinode-101100" in "kube-system" namespace to be "Ready" ...
	E0514 00:16:59.649177    4316 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-101100" hosting pod "kube-scheduler-multinode-101100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100" has status "Ready":"False"
	I0514 00:16:59.649272    4316 pod_ready.go:38] duration metric: took 1.6491558s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 00:16:59.649363    4316 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0514 00:16:59.664951    4316 command_runner.go:130] > -16
	I0514 00:16:59.665391    4316 ops.go:34] apiserver oom_adj: -16
	I0514 00:16:59.665391    4316 kubeadm.go:591] duration metric: took 12.5815566s to restartPrimaryControlPlane
	I0514 00:16:59.665391    4316 kubeadm.go:393] duration metric: took 12.6355889s to StartCluster
	I0514 00:16:59.665435    4316 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 00:16:59.665435    4316 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0514 00:16:59.667441    4316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 00:16:59.669204    4316 start.go:234] Will wait 6m0s for node &{Name: IP:172.23.102.122 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0514 00:16:59.675726    4316 out.go:177] * Verifying Kubernetes components...
	I0514 00:16:59.669204    4316 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0514 00:16:59.669667    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:16:59.682526    4316 out.go:177] * Enabled addons: 
	I0514 00:16:59.685601    4316 addons.go:505] duration metric: took 16.4853ms for enable addons: enabled=[]
	I0514 00:16:59.689164    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:16:59.965406    4316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0514 00:16:59.992213    4316 node_ready.go:35] waiting up to 6m0s for node "multinode-101100" to be "Ready" ...
	I0514 00:16:59.992480    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:16:59.992501    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:59.992501    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:59.992501    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:59.998685    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:16:59.998685    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:59.998685    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:00 GMT
	I0514 00:16:59.998685    4316 round_trippers.go:580]     Audit-Id: cc997441-e608-4041-a627-6c2e185c47bb
	I0514 00:16:59.998685    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:59.998685    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:59.998685    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:59.998685    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:59.998685    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:00.504088    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:00.504088    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:00.504088    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:00.504227    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:00.508406    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:00.508406    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:00.508496    4316 round_trippers.go:580]     Audit-Id: 24eee483-7e08-4d31-8dfc-84088194d730
	I0514 00:17:00.508496    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:00.508496    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:00.508496    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:00.508496    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:00.508496    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:00 GMT
	I0514 00:17:00.509076    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:01.000406    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:01.000666    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:01.000666    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:01.000666    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:01.004008    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:01.004008    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:01.004185    4316 round_trippers.go:580]     Audit-Id: 37917791-6276-4b98-9b71-15aeddb0a44b
	I0514 00:17:01.004185    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:01.004185    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:01.004185    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:01.004185    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:01.004185    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:01 GMT
	I0514 00:17:01.004185    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:01.501049    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:01.501126    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:01.501126    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:01.501126    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:01.505226    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:01.505226    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:01.505759    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:01.505759    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:01.505759    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:01.505759    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:01 GMT
	I0514 00:17:01.505759    4316 round_trippers.go:580]     Audit-Id: 7fece720-b918-4aef-b59b-b9df2381c9b5
	I0514 00:17:01.505759    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:01.505892    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:02.001731    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:02.001731    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:02.001731    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:02.001731    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:02.005364    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:02.005364    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:02.005364    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:02.005364    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:02.005736    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:02 GMT
	I0514 00:17:02.005736    4316 round_trippers.go:580]     Audit-Id: 2c191887-d19b-4933-ae16-5d204480ef80
	I0514 00:17:02.005736    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:02.005736    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:02.006128    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:02.007256    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:02.499979    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:02.500312    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:02.500312    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:02.500312    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:02.504440    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:02.504489    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:02.504489    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:02.504489    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:02.504489    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:02.504581    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:02.504581    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:02 GMT
	I0514 00:17:02.504581    4316 round_trippers.go:580]     Audit-Id: abbe0b62-7463-4a66-b671-b911b205de9d
	I0514 00:17:02.504802    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:03.000811    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:03.000956    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:03.000956    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:03.000956    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:03.005771    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:03.005771    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:03.005771    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:03.006468    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:03.006598    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:03 GMT
	I0514 00:17:03.006598    4316 round_trippers.go:580]     Audit-Id: dd9e868b-ee23-4f7d-8a2b-ea95bd9c3cee
	I0514 00:17:03.006598    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:03.006598    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:03.006966    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:03.500457    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:03.500457    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:03.500457    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:03.500457    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:03.504836    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:03.504836    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:03.504836    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:03.504836    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:03.504836    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:03.504836    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:03 GMT
	I0514 00:17:03.504836    4316 round_trippers.go:580]     Audit-Id: c4a8b267-fc64-474b-b9ee-bf6bf6edf98f
	I0514 00:17:03.504836    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:03.505459    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:03.999148    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:03.999148    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:03.999148    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:03.999148    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:04.003159    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:04.003159    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:04.003159    4316 round_trippers.go:580]     Audit-Id: 8c75461a-1b8e-4d6d-b3af-79918329b9a3
	I0514 00:17:04.003159    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:04.003159    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:04.003159    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:04.003159    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:04.003159    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:04 GMT
	I0514 00:17:04.003159    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:04.498170    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:04.498170    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:04.498170    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:04.498170    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:04.502661    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:04.502661    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:04.502661    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:04.502661    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:04.502661    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:04.502661    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:04.502661    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:04 GMT
	I0514 00:17:04.502661    4316 round_trippers.go:580]     Audit-Id: 40859645-6768-4e34-9836-dc90f4e3cac3
	I0514 00:17:04.502975    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:04.503735    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:04.997006    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:04.997006    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:04.997206    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:04.997206    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:04.999954    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:04.999954    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:04.999954    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:04.999954    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:04.999954    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:05 GMT
	I0514 00:17:04.999954    4316 round_trippers.go:580]     Audit-Id: bae1c90f-f7ec-42c0-9c4f-08b8aab803ce
	I0514 00:17:04.999954    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:04.999954    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:05.000625    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:05.494904    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:05.494904    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:05.494904    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:05.494904    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:05.499039    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:05.499092    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:05.499092    4316 round_trippers.go:580]     Audit-Id: 3a8866cf-e13e-4fb2-8c89-8496bd033786
	I0514 00:17:05.499092    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:05.499092    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:05.499092    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:05.499092    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:05.499092    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:05 GMT
	I0514 00:17:05.499092    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:05.996017    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:05.996248    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:05.996248    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:05.996248    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:06.000605    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:06.000840    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:06.000840    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:06 GMT
	I0514 00:17:06.000840    4316 round_trippers.go:580]     Audit-Id: 33eef445-7b1a-4454-9aca-231e5e0096e7
	I0514 00:17:06.000840    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:06.000840    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:06.000840    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:06.000840    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:06.001010    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:06.494357    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:06.494594    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:06.494594    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:06.494594    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:06.497951    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:06.497951    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:06.497951    4316 round_trippers.go:580]     Audit-Id: c6b28072-705f-4ea0-a13c-29f6b4b6b056
	I0514 00:17:06.497951    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:06.497951    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:06.497951    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:06.497951    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:06.497951    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:06 GMT
	I0514 00:17:06.498932    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:06.998763    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:06.998862    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:06.998862    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:06.998862    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:07.001571    4316 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0514 00:17:07.001571    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:07.001571    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:07.001571    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:07.001571    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:07.001571    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:07.001571    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:07 GMT
	I0514 00:17:07.001571    4316 round_trippers.go:580]     Audit-Id: 2b1d7c25-562a-4cc3-ba93-30f5f9e5f048
	I0514 00:17:07.001571    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:07.002325    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:07.500066    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:07.500144    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:07.500144    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:07.500144    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:07.503499    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:07.503499    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:07.503499    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:07.503499    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:07.503499    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:07.503499    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:07.503499    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:07 GMT
	I0514 00:17:07.503499    4316 round_trippers.go:580]     Audit-Id: f31b5e5f-6015-42cd-8e85-3e2cdb8c97e4
	I0514 00:17:07.504356    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:08.001178    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:08.001178    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:08.001178    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:08.001178    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:08.004866    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:08.004866    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:08.004866    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:08 GMT
	I0514 00:17:08.004866    4316 round_trippers.go:580]     Audit-Id: 1a879941-aa3c-4aad-bc7a-d6adf682914d
	I0514 00:17:08.004866    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:08.004866    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:08.004866    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:08.004866    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:08.005717    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:08.497834    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:08.497834    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:08.497834    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:08.497834    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:08.501585    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:08.501585    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:08.501585    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:08.501585    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:08.501736    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:08.501736    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:08.501736    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:08 GMT
	I0514 00:17:08.501736    4316 round_trippers.go:580]     Audit-Id: 221981f1-96b4-44db-9c65-caa46decdcc5
	I0514 00:17:08.501973    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:08.997338    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:08.997426    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:08.997426    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:08.997426    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:09.004125    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:09.004125    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:09.004125    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:09.004125    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:09 GMT
	I0514 00:17:09.004125    4316 round_trippers.go:580]     Audit-Id: e466722a-540b-4937-bfb7-0c896c9ccb5b
	I0514 00:17:09.004125    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:09.004125    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:09.004125    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:09.005086    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:09.005086    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:09.501403    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:09.501403    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:09.501403    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:09.501403    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:09.507067    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:17:09.507067    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:09.507067    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:09.507598    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:09.507598    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:09.507598    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:09 GMT
	I0514 00:17:09.507598    4316 round_trippers.go:580]     Audit-Id: cbbdf0d0-2f6a-4e29-81c1-3c4a6efa2c46
	I0514 00:17:09.507598    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:09.507945    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:10.003691    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:10.003976    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:10.004060    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:10.004060    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:10.006954    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:10.007561    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:10.007561    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:10 GMT
	I0514 00:17:10.007561    4316 round_trippers.go:580]     Audit-Id: 229b886c-cb94-4ee4-bbe6-4bcc5bd051dd
	I0514 00:17:10.007561    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:10.007561    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:10.007561    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:10.007681    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:10.007896    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:10.501059    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:10.501059    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:10.501059    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:10.501059    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:10.504760    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:10.505683    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:10.505683    4316 round_trippers.go:580]     Audit-Id: 3b002635-5e87-4db8-85dc-c81e205c958f
	I0514 00:17:10.505683    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:10.505683    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:10.505683    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:10.505683    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:10.505683    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:10 GMT
	I0514 00:17:10.506062    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:11.003157    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:11.003545    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:11.003545    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:11.003545    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:11.011813    4316 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0514 00:17:11.011813    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:11.011813    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:11.011813    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:11.011813    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:11.011813    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:11.011813    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:11 GMT
	I0514 00:17:11.011813    4316 round_trippers.go:580]     Audit-Id: ff6a6741-0a52-4f07-8aa6-2c4bc8ff79fe
	I0514 00:17:11.011813    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:11.011813    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:11.503487    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:11.503487    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:11.503487    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:11.503565    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:11.507407    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:11.507464    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:11.507464    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:11.507464    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:11 GMT
	I0514 00:17:11.507464    4316 round_trippers.go:580]     Audit-Id: 5df9f0ec-c129-41b8-a618-215b48a6ef67
	I0514 00:17:11.507464    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:11.507464    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:11.507464    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:11.507464    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:12.005131    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:12.005131    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:12.005131    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:12.005229    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:12.008510    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:12.008729    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:12.008729    4316 round_trippers.go:580]     Audit-Id: 3d4c88a8-bd66-4282-b6c5-b345b1dde78b
	I0514 00:17:12.008729    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:12.008729    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:12.008825    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:12.008825    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:12.008825    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:12 GMT
	I0514 00:17:12.009103    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:12.502348    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:12.502348    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:12.502348    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:12.502348    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:12.509251    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:12.509251    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:12.509251    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:12 GMT
	I0514 00:17:12.509251    4316 round_trippers.go:580]     Audit-Id: 1555e88a-068d-42bd-9d7b-7a52f617e216
	I0514 00:17:12.509251    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:12.509251    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:12.509251    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:12.509251    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:12.509251    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:13.004350    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:13.004547    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:13.004547    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:13.004547    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:13.007419    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:13.008127    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:13.008127    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:13.008127    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:13 GMT
	I0514 00:17:13.008225    4316 round_trippers.go:580]     Audit-Id: 9435aa9e-e42e-4c1e-b278-8a56ff8b06be
	I0514 00:17:13.008225    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:13.008225    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:13.008225    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:13.008662    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:13.506181    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:13.506612    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:13.506612    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:13.506612    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:13.513906    4316 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0514 00:17:13.513906    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:13.513906    4316 round_trippers.go:580]     Audit-Id: 1fd4b84f-bf3b-4eda-b37a-2467411fa5f8
	I0514 00:17:13.513906    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:13.513906    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:13.513906    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:13.513906    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:13.513906    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:13 GMT
	I0514 00:17:13.513906    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:13.514559    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:14.007980    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:14.008291    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:14.008291    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:14.008291    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:14.011685    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:14.012085    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:14.012085    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:14.012085    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:14.012085    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:14 GMT
	I0514 00:17:14.012085    4316 round_trippers.go:580]     Audit-Id: e1b51e40-3b3f-495a-8694-a3d7610858fd
	I0514 00:17:14.012085    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:14.012085    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:14.012584    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:14.493730    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:14.493730    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:14.493730    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:14.493730    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:14.497995    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:14.498082    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:14.498082    4316 round_trippers.go:580]     Audit-Id: 221d4596-8973-404b-ad7c-67e3e171c1c8
	I0514 00:17:14.498082    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:14.498082    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:14.498082    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:14.498082    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:14.498082    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:14 GMT
	I0514 00:17:14.498082    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:15.006074    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:15.006074    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:15.006074    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:15.006074    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:15.009748    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:15.009748    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:15.009748    4316 round_trippers.go:580]     Audit-Id: 9edf3cb0-a6a3-44c5-8b1a-2e66b38cce51
	I0514 00:17:15.009748    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:15.009748    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:15.010208    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:15.010208    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:15.010208    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:15 GMT
	I0514 00:17:15.010354    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:15.494447    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:15.494447    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:15.494522    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:15.494522    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:15.499662    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:17:15.500217    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:15.500217    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:15.500217    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:15.500217    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:15.500217    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:15.500217    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:15 GMT
	I0514 00:17:15.500339    4316 round_trippers.go:580]     Audit-Id: 895290ed-8eba-4c7c-94a6-b78d4dcc56bd
	I0514 00:17:15.500377    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:15.994939    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:15.995027    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:15.995027    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:15.995027    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:15.998439    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:15.998439    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:15.998439    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:15.998439    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:15.998439    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:15.998439    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:15.998439    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:16 GMT
	I0514 00:17:15.998439    4316 round_trippers.go:580]     Audit-Id: fec4758a-5cb2-45cb-adfd-b9cdc0dadde2
	I0514 00:17:16.002011    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:16.002627    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:16.496044    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:16.496307    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:16.496381    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:16.496381    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:16.505041    4316 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0514 00:17:16.505041    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:16.505226    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:16 GMT
	I0514 00:17:16.505226    4316 round_trippers.go:580]     Audit-Id: bbbf4538-dad2-42fb-8b32-36e25e0b7e24
	I0514 00:17:16.505226    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:16.505226    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:16.505226    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:16.505226    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:16.505497    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:16.995618    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:16.995618    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:16.995975    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:16.995975    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:17.002562    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:17.002562    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:17.002562    4316 round_trippers.go:580]     Audit-Id: 0e591535-ac13-4407-8ce4-b9fb09d627cb
	I0514 00:17:17.002562    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:17.002562    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:17.002562    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:17.002562    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:17.002562    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:17 GMT
	I0514 00:17:17.003179    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:17.509026    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:17.509261    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:17.509261    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:17.509261    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:17.516782    4316 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0514 00:17:17.516782    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:17.516782    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:17.516782    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:17.516782    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:17 GMT
	I0514 00:17:17.516782    4316 round_trippers.go:580]     Audit-Id: 3af16802-b6ae-4f85-833a-a044cbaeac1f
	I0514 00:17:17.516782    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:17.516782    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:17.516782    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:17.994933    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:17.994933    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:17.994933    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:17.994933    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:17.999521    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:17.999758    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:17.999758    4316 round_trippers.go:580]     Audit-Id: 59135110-920e-4b83-8b73-f58b4239205c
	I0514 00:17:17.999758    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:17.999758    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:17.999758    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:17.999758    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:17.999758    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:18 GMT
	I0514 00:17:18.000575    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:18.508707    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:18.508707    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:18.508815    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:18.508815    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:18.513115    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:18.513115    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:18.513115    4316 round_trippers.go:580]     Audit-Id: 479205e8-3c96-4338-82da-5e2ece09b2a9
	I0514 00:17:18.513115    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:18.513115    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:18.513223    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:18.513223    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:18.513223    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:18 GMT
	I0514 00:17:18.513223    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:18.513915    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:19.007141    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:19.007141    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:19.007141    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:19.007141    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:19.010735    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:19.010735    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:19.010735    4316 round_trippers.go:580]     Audit-Id: f7500649-d19f-478b-9507-76341986dee8
	I0514 00:17:19.010735    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:19.010735    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:19.010735    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:19.010735    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:19.010735    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:19 GMT
	I0514 00:17:19.011373    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:19.504284    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:19.504284    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:19.504284    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:19.504284    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:19.509050    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:19.509050    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:19.509050    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:19.509050    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:19 GMT
	I0514 00:17:19.509050    4316 round_trippers.go:580]     Audit-Id: e0317e84-fa61-483d-bf47-278b5128a9ad
	I0514 00:17:19.509050    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:19.509050    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:19.509050    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:19.509050    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:20.004793    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:20.004793    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:20.004882    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:20.004882    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:20.011393    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:20.011393    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:20.011393    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:20.011393    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:20.011393    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:20.011393    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:20 GMT
	I0514 00:17:20.011393    4316 round_trippers.go:580]     Audit-Id: e03104a6-fe19-4c40-926b-af0b58a3371f
	I0514 00:17:20.011393    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:20.012090    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:20.503277    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:20.503277    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:20.503357    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:20.503357    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:20.507687    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:20.507687    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:20.508195    4316 round_trippers.go:580]     Audit-Id: 81721e37-febf-417c-91d2-b94ae71958df
	I0514 00:17:20.508195    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:20.508195    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:20.508195    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:20.508195    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:20.508195    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:20 GMT
	I0514 00:17:20.508737    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:21.003439    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:21.003439    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:21.003576    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:21.003576    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:21.006737    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:21.006737    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:21.006737    4316 round_trippers.go:580]     Audit-Id: eb73f551-8506-4cbc-a46a-194448e260a7
	I0514 00:17:21.006737    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:21.006737    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:21.006737    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:21.006737    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:21.006737    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:21 GMT
	I0514 00:17:21.008340    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:21.008788    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:21.502201    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:21.502276    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:21.502276    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:21.502347    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:21.506139    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:21.506139    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:21.506139    4316 round_trippers.go:580]     Audit-Id: 69973cd4-3fc9-4861-8dfc-cbffa11d7466
	I0514 00:17:21.506139    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:21.506139    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:21.506139    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:21.506139    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:21.506139    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:21 GMT
	I0514 00:17:21.506139    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:22.001877    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:22.001877    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:22.001877    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:22.002176    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:22.005781    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:22.005781    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:22.005781    4316 round_trippers.go:580]     Audit-Id: 34ccf271-3d80-422f-83f3-a1097ded2732
	I0514 00:17:22.005781    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:22.005781    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:22.005781    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:22.005781    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:22.005781    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:22 GMT
	I0514 00:17:22.006653    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:22.503919    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:22.503919    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:22.503919    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:22.503919    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:22.508448    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:22.508448    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:22.508918    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:22 GMT
	I0514 00:17:22.508918    4316 round_trippers.go:580]     Audit-Id: 7ec04073-2193-4304-82b6-63ac74c95951
	I0514 00:17:22.508918    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:22.508918    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:22.508918    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:22.508918    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:22.509337    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:23.002506    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:23.002506    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:23.002506    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:23.002506    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:23.006672    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:23.006672    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:23.006672    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:23 GMT
	I0514 00:17:23.006672    4316 round_trippers.go:580]     Audit-Id: 4868224d-54d0-4d4a-a3f6-fb1a956fb101
	I0514 00:17:23.006672    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:23.006672    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:23.006672    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:23.006672    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:23.006672    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:23.499973    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:23.500188    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:23.500188    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:23.500188    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:23.506468    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:23.506468    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:23.506468    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:23.506468    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:23 GMT
	I0514 00:17:23.506468    4316 round_trippers.go:580]     Audit-Id: f2404b85-a611-4dd6-a0d1-e06e2c446b8f
	I0514 00:17:23.506468    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:23.506468    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:23.506468    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:23.506468    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:23.507187    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:24.000785    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:24.000785    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:24.000785    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:24.000871    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:24.006782    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:17:24.006782    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:24.006782    4316 round_trippers.go:580]     Audit-Id: e2dc4502-bf0b-4588-b95d-5022699196e6
	I0514 00:17:24.006782    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:24.006782    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:24.006782    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:24.006782    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:24.006782    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:24 GMT
	I0514 00:17:24.007754    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:24.500261    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:24.500261    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:24.500261    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:24.500261    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:24.503843    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:24.503843    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:24.503843    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:24.503843    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:24.503843    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:24.503843    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:24.503843    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:24 GMT
	I0514 00:17:24.503843    4316 round_trippers.go:580]     Audit-Id: cc556569-cefc-4af6-96ee-35e43cde5d74
	I0514 00:17:24.504062    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:24.998350    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:24.998411    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:24.998411    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:24.998411    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:25.003124    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:25.003124    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:25.003124    4316 round_trippers.go:580]     Audit-Id: 2a73aaae-2a39-4ddf-9ab8-9d341270bae3
	I0514 00:17:25.003124    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:25.003124    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:25.003124    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:25.003124    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:25.003124    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:25 GMT
	I0514 00:17:25.004765    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:25.497396    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:25.497396    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:25.497396    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:25.497396    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:25.501304    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:25.501387    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:25.501387    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:25.501471    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:25.501524    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:25.501524    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:25.501524    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:25 GMT
	I0514 00:17:25.501524    4316 round_trippers.go:580]     Audit-Id: 5769d55b-baf5-46cf-8dc5-210854181aaf
	I0514 00:17:25.501524    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:26.001833    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:26.001975    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:26.001975    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:26.001975    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:26.011931    4316 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0514 00:17:26.011931    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:26.012675    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:26 GMT
	I0514 00:17:26.012675    4316 round_trippers.go:580]     Audit-Id: fdf514e0-2c25-4751-9a09-7b9df168026b
	I0514 00:17:26.012675    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:26.012675    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:26.012675    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:26.012675    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:26.013075    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:26.013920    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:26.499505    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:26.499505    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:26.499505    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:26.499505    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:26.503545    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:26.503545    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:26.504100    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:26.504100    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:26.504100    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:26 GMT
	I0514 00:17:26.504100    4316 round_trippers.go:580]     Audit-Id: 05d7835b-eac2-482b-b4e1-38dd2971ad48
	I0514 00:17:26.504100    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:26.504100    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:26.504503    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:26.996069    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:26.996069    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:26.996069    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:26.996176    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:27.004041    4316 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0514 00:17:27.004093    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:27.004093    4316 round_trippers.go:580]     Audit-Id: 5742a2a6-01c0-4532-b55e-e43532408f92
	I0514 00:17:27.004093    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:27.004093    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:27.004093    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:27.004093    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:27.004093    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:27 GMT
	I0514 00:17:27.004093    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:27.497694    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:27.497797    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:27.497797    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:27.497797    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:27.501582    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:27.501582    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:27.501582    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:27 GMT
	I0514 00:17:27.501582    4316 round_trippers.go:580]     Audit-Id: 53ea86d1-ff2c-4219-955f-69164e50ba12
	I0514 00:17:27.501582    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:27.501582    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:27.501582    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:27.501582    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:27.502337    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:27.999944    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:27.999944    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:27.999944    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:27.999944    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:28.003969    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:28.003969    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:28.003969    4316 round_trippers.go:580]     Audit-Id: aedd3117-9185-42be-9ca5-dbf34ac0accd
	I0514 00:17:28.003969    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:28.003969    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:28.003969    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:28.003969    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:28.003969    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:28 GMT
	I0514 00:17:28.003969    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:28.499091    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:28.499452    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:28.499452    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:28.499570    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:28.503819    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:28.504123    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:28.504123    4316 round_trippers.go:580]     Audit-Id: 7b49abca-0ab8-4030-aa00-e3f6805f999f
	I0514 00:17:28.504123    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:28.504123    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:28.504220    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:28.504220    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:28.504220    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:28 GMT
	I0514 00:17:28.504584    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:28.505296    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:29.001123    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:29.001207    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:29.001207    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:29.001207    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:29.004425    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:29.004516    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:29.004516    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:29.004572    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:29 GMT
	I0514 00:17:29.004572    4316 round_trippers.go:580]     Audit-Id: 636522da-423f-4faf-8a08-900f96456c85
	I0514 00:17:29.004572    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:29.004572    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:29.004572    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:29.004822    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:29.500246    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:29.500456    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:29.500456    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:29.500456    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:29.504291    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:29.504291    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:29.504291    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:29.505248    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:29.505248    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:29.505248    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:29 GMT
	I0514 00:17:29.505248    4316 round_trippers.go:580]     Audit-Id: 733c4806-c1a8-4d11-9869-0dd64cb02ba6
	I0514 00:17:29.505248    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:29.505529    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:30.001139    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:30.001139    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:30.001313    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:30.001313    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:30.004627    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:30.004627    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:30.004627    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:30.005200    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:30.005200    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:30.005200    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:30.005200    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:30 GMT
	I0514 00:17:30.005200    4316 round_trippers.go:580]     Audit-Id: a5b4917c-6581-4585-ae74-a2192e69031c
	I0514 00:17:30.005515    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:30.503983    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:30.503983    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:30.503983    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:30.503983    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:30.509404    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:17:30.509404    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:30.509404    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:30 GMT
	I0514 00:17:30.509404    4316 round_trippers.go:580]     Audit-Id: de04191c-85cf-4518-9bd0-eaa1e90c242f
	I0514 00:17:30.509404    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:30.509404    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:30.509404    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:30.509404    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:30.510025    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:30.510682    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:30.999641    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:30.999641    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:30.999711    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:30.999711    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:31.008118    4316 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0514 00:17:31.008118    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:31.008118    4316 round_trippers.go:580]     Audit-Id: 4c862920-74d3-49f8-895a-cd8b9284790c
	I0514 00:17:31.008118    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:31.008118    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:31.008118    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:31.008118    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:31.008118    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:31 GMT
	I0514 00:17:31.008723    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:31.498866    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:31.499278    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:31.499278    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:31.499278    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:31.502529    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:31.503424    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:31.503424    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:31.503424    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:31.503424    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:31.503424    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:31.503424    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:31 GMT
	I0514 00:17:31.503424    4316 round_trippers.go:580]     Audit-Id: 214c2a75-2eae-45ff-b9a2-5f4d734eb068
	I0514 00:17:31.503590    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:31.994958    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:31.994958    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:31.994958    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:31.994958    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:31.998566    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:31.998863    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:31.998863    4316 round_trippers.go:580]     Audit-Id: 3da4648a-0265-4e07-8163-f148c8e88582
	I0514 00:17:31.998863    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:31.998863    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:31.998863    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:31.998863    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:31.998863    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:32 GMT
	I0514 00:17:31.998863    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:32.496175    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:32.496426    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:32.496426    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:32.496426    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:32.500299    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:32.500421    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:32.500421    4316 round_trippers.go:580]     Audit-Id: b96ce126-e0bf-43af-94c9-43590bcbbbfe
	I0514 00:17:32.500421    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:32.500421    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:32.500483    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:32.500483    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:32.500483    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:32 GMT
	I0514 00:17:32.500848    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:33.007423    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:33.007423    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:33.007423    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:33.007423    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:33.011877    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:33.011944    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:33.011944    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:33.011944    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:33.011944    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:33 GMT
	I0514 00:17:33.012013    4316 round_trippers.go:580]     Audit-Id: 2a7f09d6-d531-46b8-8b5e-1c7aac6609b0
	I0514 00:17:33.012013    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:33.012013    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:33.012261    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:33.013029    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:33.494994    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:33.495065    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:33.495136    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:33.495136    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:33.501513    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:33.501513    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:33.501513    4316 round_trippers.go:580]     Audit-Id: 69879bba-0b15-412c-b66e-4aef596b2aa1
	I0514 00:17:33.501513    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:33.501513    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:33.501513    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:33.501513    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:33.501513    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:33 GMT
	I0514 00:17:33.502208    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:34.007456    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:34.007524    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:34.007524    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:34.007601    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:34.010993    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:34.011466    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:34.011466    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:34.011466    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:34.011466    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:34.011466    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:34 GMT
	I0514 00:17:34.011466    4316 round_trippers.go:580]     Audit-Id: 0f7fa905-ee1d-40a4-8867-1d7b5cd76008
	I0514 00:17:34.011466    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:34.011703    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:34.507624    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:34.507624    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:34.507624    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:34.507624    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:34.511372    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:34.511536    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:34.511536    4316 round_trippers.go:580]     Audit-Id: a370dbda-fc73-4e5a-ab10-80e2e98429fb
	I0514 00:17:34.511536    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:34.511536    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:34.511536    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:34.511536    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:34.511536    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:34 GMT
	I0514 00:17:34.511691    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:35.003518    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:35.003518    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:35.003518    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:35.003610    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:35.010064    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:35.010064    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:35.010064    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:35.010064    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:35 GMT
	I0514 00:17:35.010064    4316 round_trippers.go:580]     Audit-Id: 18f3cc11-c3fb-4dc2-96df-ca23aaca2693
	I0514 00:17:35.010064    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:35.010064    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:35.010064    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:35.010064    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:35.503362    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:35.503429    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:35.503495    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:35.503495    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:35.507652    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:35.507652    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:35.507652    4316 round_trippers.go:580]     Audit-Id: 2ea09c9b-84e4-4471-b122-e78e9530a22c
	I0514 00:17:35.507652    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:35.508420    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:35.508420    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:35.508420    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:35.508420    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:35 GMT
	I0514 00:17:35.508691    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:35.509244    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:35.997864    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:35.997864    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:35.997928    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:35.997928    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:36.004603    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:36.004603    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:36.004603    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:36 GMT
	I0514 00:17:36.004603    4316 round_trippers.go:580]     Audit-Id: 96e1d5fc-daa4-45b2-bae7-f98520d72724
	I0514 00:17:36.004603    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:36.004603    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:36.004603    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:36.004603    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:36.004603    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:36.005856    4316 node_ready.go:49] node "multinode-101100" has status "Ready":"True"
	I0514 00:17:36.005906    4316 node_ready.go:38] duration metric: took 36.0113743s for node "multinode-101100" to be "Ready" ...
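The transition above — `"Ready":"False"` on each poll until the node object finally reports `"Ready":"True"` — comes down to inspecting the `status.conditions` list in the Node JSON bodies shown in the trace. A minimal self-contained sketch of that check (hypothetical struct mirroring only the relevant fields; not minikube's actual `node_ready.go` code):

```go
package main

import "fmt"

// NodeCondition mirrors the two fields of a Kubernetes node condition
// that matter for the readiness check in the trace above.
type NodeCondition struct {
	Type   string // e.g. "Ready"
	Status string // "True", "False", or "Unknown"
}

// nodeIsReady reports whether the node's Ready condition has
// Status "True" — the check repeated on every poll in the log.
func nodeIsReady(conds []NodeCondition) bool {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false // no Ready condition reported yet
}

func main() {
	polling := []NodeCondition{{Type: "Ready", Status: "False"}}
	done := []NodeCondition{{Type: "Ready", Status: "True"}}
	fmt.Println(nodeIsReady(polling), nodeIsReady(done))
}
```

Note that an absent Ready condition is treated the same as `"False"`, which is why a freshly registered node keeps the poll looping.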
	I0514 00:17:36.005958    4316 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 00:17:36.006019    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods
	I0514 00:17:36.006019    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:36.006019    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:36.006019    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:36.010618    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:36.010618    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:36.010618    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:36.010618    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:36.010618    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:36.010618    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:36.010618    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:36 GMT
	I0514 00:17:36.010618    4316 round_trippers.go:580]     Audit-Id: 068cc6f3-d6ce-4793-a62e-e203dc47caf3
	I0514 00:17:36.012984    4316 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1826"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87076 chars]
	I0514 00:17:36.016931    4316 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace to be "Ready" ...
	I0514 00:17:36.017061    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:36.017061    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:36.017061    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:36.017126    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:36.019703    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:36.019703    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:36.019703    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:36.019703    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:36.019703    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:36.019703    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:36.019703    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:36 GMT
	I0514 00:17:36.019703    4316 round_trippers.go:580]     Audit-Id: 3b7203b0-9e6c-4a40-ae60-0c1565d9d0ae
	I0514 00:17:36.020697    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:36.021315    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:36.021315    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:36.021315    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:36.021372    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:36.023642    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:36.023642    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:36.023642    4316 round_trippers.go:580]     Audit-Id: 3668b121-ba24-4002-9e03-a51fb3200ba1
	I0514 00:17:36.023642    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:36.023642    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:36.023642    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:36.023642    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:36.023642    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:36 GMT
	I0514 00:17:36.023642    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:36.527892    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:36.527892    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:36.527892    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:36.527892    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:36.531511    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:36.531662    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:36.531662    4316 round_trippers.go:580]     Audit-Id: 5f214ea3-d2b8-4129-9aed-5c6f5eba1019
	I0514 00:17:36.531662    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:36.531662    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:36.531662    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:36.531662    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:36.531662    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:36 GMT
	I0514 00:17:36.531750    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:36.532464    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:36.532464    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:36.532464    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:36.532464    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:36.537113    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:36.537113    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:36.537113    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:36.537113    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:36.537113    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:36.537113    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:36 GMT
	I0514 00:17:36.537113    4316 round_trippers.go:580]     Audit-Id: 14adb0c8-9869-49f9-a0c9-c319876164f8
	I0514 00:17:36.537113    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:36.537652    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:37.027356    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:37.027608    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:37.027686    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:37.027686    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:37.031944    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:37.031944    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:37.031944    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:37.031944    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:37.031944    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:37 GMT
	I0514 00:17:37.031944    4316 round_trippers.go:580]     Audit-Id: 44431413-c014-4d24-9cdf-ab2569815d98
	I0514 00:17:37.032079    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:37.032079    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:37.032502    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:37.033829    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:37.033903    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:37.033903    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:37.033903    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:37.036639    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:37.036935    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:37.036935    4316 round_trippers.go:580]     Audit-Id: 0350951a-02b2-4fd2-b3bb-08335425abee
	I0514 00:17:37.036935    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:37.036935    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:37.036935    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:37.036935    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:37.036935    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:37 GMT
	I0514 00:17:37.037344    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:37.527783    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:37.527865    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:37.527865    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:37.527865    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:37.531713    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:37.531713    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:37.531713    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:37.531713    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:37 GMT
	I0514 00:17:37.531713    4316 round_trippers.go:580]     Audit-Id: 1fafa712-1225-4491-973c-42c8fc84a4b1
	I0514 00:17:37.531713    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:37.531713    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:37.531713    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:37.531713    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:37.532869    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:37.532869    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:37.532869    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:37.532869    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:37.535419    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:37.535419    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:37.535419    4316 round_trippers.go:580]     Audit-Id: 7157764a-c376-403e-bcd1-3f311b4bb645
	I0514 00:17:37.535419    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:37.535419    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:37.535419    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:37.535419    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:37.535968    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:37 GMT
	I0514 00:17:37.536223    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:38.024639    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:38.024694    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:38.024726    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:38.024726    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:38.028832    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:38.028832    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:38.028832    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:38 GMT
	I0514 00:17:38.028832    4316 round_trippers.go:580]     Audit-Id: 36a59dd7-5add-4132-ab14-00a074e5e56f
	I0514 00:17:38.028832    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:38.028832    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:38.028832    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:38.028832    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:38.028832    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:38.029690    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:38.029690    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:38.029690    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:38.029753    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:38.032797    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:38.032853    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:38.032853    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:38.032853    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:38.032853    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:38.032853    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:38 GMT
	I0514 00:17:38.032853    4316 round_trippers.go:580]     Audit-Id: a9a5cfc2-2338-4326-9faa-bf514c981ab8
	I0514 00:17:38.032853    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:38.032853    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:38.033553    4316 pod_ready.go:102] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"False"
	I0514 00:17:38.524972    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:38.524972    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:38.525270    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:38.525270    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:38.529619    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:38.530439    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:38.530439    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:38 GMT
	I0514 00:17:38.530439    4316 round_trippers.go:580]     Audit-Id: feece2ad-cb70-476d-9b9a-d39e71a5f295
	I0514 00:17:38.530439    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:38.530439    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:38.530439    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:38.530439    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:38.530991    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:38.532104    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:38.532104    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:38.532104    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:38.532189    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:38.535396    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:38.535396    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:38.535396    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:38.535396    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:38 GMT
	I0514 00:17:38.535617    4316 round_trippers.go:580]     Audit-Id: 1bde8587-06b0-4adc-bfcc-1f6819def8cc
	I0514 00:17:38.535723    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:38.535723    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:38.535723    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:38.536119    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:39.026266    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:39.026266    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:39.026266    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:39.026266    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:39.030183    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:39.030183    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:39.030183    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:39.030183    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:39.030183    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:39.030183    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:39 GMT
	I0514 00:17:39.030183    4316 round_trippers.go:580]     Audit-Id: e5255604-5444-4f2b-a83e-7b747f867314
	I0514 00:17:39.030183    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:39.030183    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:39.031075    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:39.031151    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:39.031151    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:39.031151    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:39.033360    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:39.034295    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:39.034295    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:39.034295    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:39.034295    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:39.034295    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:39 GMT
	I0514 00:17:39.034295    4316 round_trippers.go:580]     Audit-Id: 6caf351d-6630-435c-ad7d-84d2810267f4
	I0514 00:17:39.034295    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:39.035361    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:39.522968    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:39.522968    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:39.522968    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:39.522968    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:39.528757    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:17:39.528757    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:39.528757    4316 round_trippers.go:580]     Audit-Id: 19e650ad-0907-4c36-b29b-8a1199516028
	I0514 00:17:39.528757    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:39.528757    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:39.528757    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:39.528757    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:39.528757    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:39 GMT
	I0514 00:17:39.528757    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:39.529969    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:39.530022    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:39.530070    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:39.530070    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:39.532876    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:39.532876    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:39.532876    4316 round_trippers.go:580]     Audit-Id: 6d2e56a7-9bac-4803-99e8-e1c49670b829
	I0514 00:17:39.532876    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:39.532876    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:39.532876    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:39.532876    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:39.532876    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:39 GMT
	I0514 00:17:39.534036    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:40.026994    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:40.027078    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:40.027078    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:40.027078    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:40.030390    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:40.030390    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:40.030390    4316 round_trippers.go:580]     Audit-Id: 658b85ed-ea30-4dd0-94ff-8006fae55e98
	I0514 00:17:40.030390    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:40.030390    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:40.030390    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:40.030390    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:40.030390    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:40 GMT
	I0514 00:17:40.031146    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:40.032140    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:40.032140    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:40.032249    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:40.032249    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:40.035718    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:40.035718    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:40.035718    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:40.035718    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:40.035718    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:40.035718    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:40.035718    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:40 GMT
	I0514 00:17:40.035718    4316 round_trippers.go:580]     Audit-Id: be2e52f6-a7a5-4357-8179-6c5b3aa5e955
	I0514 00:17:40.035718    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:40.035718    4316 pod_ready.go:102] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"False"
	I0514 00:17:40.526870    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:40.527181    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:40.527181    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:40.527181    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:40.530385    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:40.530385    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:40.530385    4316 round_trippers.go:580]     Audit-Id: b7d7852c-b5e2-4e7b-8ac0-445fc8ec8aa8
	I0514 00:17:40.530385    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:40.530385    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:40.530385    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:40.530385    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:40.530385    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:40 GMT
	I0514 00:17:40.531484    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:40.531652    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:40.531652    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:40.532180    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:40.532180    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:40.539969    4316 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0514 00:17:40.539969    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:40.539969    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:40.539969    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:40 GMT
	I0514 00:17:40.539969    4316 round_trippers.go:580]     Audit-Id: 51f8fb00-260e-472c-b6ce-cebcd4657b9f
	I0514 00:17:40.539969    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:40.539969    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:40.539969    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:40.539969    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:41.027667    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:41.027976    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:41.027976    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:41.027976    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:41.034298    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:41.034590    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:41.034590    4316 round_trippers.go:580]     Audit-Id: 6e465e73-5fde-4b03-a6ef-8a76d9d0a0ea
	I0514 00:17:41.034590    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:41.034590    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:41.034590    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:41.034590    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:41.034590    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:41 GMT
	I0514 00:17:41.034816    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:41.035391    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:41.035490    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:41.035490    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:41.035490    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:41.038660    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:41.038660    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:41.038815    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:41.038815    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:41.038815    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:41 GMT
	I0514 00:17:41.038815    4316 round_trippers.go:580]     Audit-Id: fc8e2b6c-50ab-4e41-931a-b5e3591a74fe
	I0514 00:17:41.038815    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:41.038815    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:41.039216    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:41.527583    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:41.527583    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:41.527583    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:41.527583    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:41.531258    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:41.531258    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:41.531258    4316 round_trippers.go:580]     Audit-Id: 43de37d2-42d4-49ed-b700-dbc652fc88df
	I0514 00:17:41.531258    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:41.531258    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:41.531258    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:41.531258    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:41.531258    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:41 GMT
	I0514 00:17:41.532017    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:41.533115    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:41.533115    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:41.533193    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:41.533193    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:41.538360    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:17:41.538360    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:41.538360    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:41.538360    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:41.538360    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:41.538360    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:41.538360    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:41 GMT
	I0514 00:17:41.538360    4316 round_trippers.go:580]     Audit-Id: 177c2e4f-ef92-4c7d-af2c-f0740ad67947
	I0514 00:17:41.539542    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:42.024220    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:42.024441    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:42.024441    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:42.024796    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:42.030251    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:17:42.030251    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:42.030251    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:42.030251    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:42.030251    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:42.030251    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:42 GMT
	I0514 00:17:42.030251    4316 round_trippers.go:580]     Audit-Id: ba56d365-23e5-43cf-b1bb-d17c09b50685
	I0514 00:17:42.030251    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:42.030901    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:42.032706    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:42.032706    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:42.032706    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:42.032706    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:42.035288    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:42.035288    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:42.035288    4316 round_trippers.go:580]     Audit-Id: 62a789cc-ae0c-46fd-a0c3-6620dabddcbd
	I0514 00:17:42.035288    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:42.035288    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:42.035288    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:42.035288    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:42.035288    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:42 GMT
	I0514 00:17:42.036203    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:42.036629    4316 pod_ready.go:102] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"False"
	I0514 00:17:42.523370    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:42.523370    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:42.523370    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:42.523370    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:42.527812    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:42.527812    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:42.527812    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:42.527812    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:42.527812    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:42.527812    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:42 GMT
	I0514 00:17:42.527812    4316 round_trippers.go:580]     Audit-Id: 50de682e-f460-4d63-adb4-c8e0fa22b8dd
	I0514 00:17:42.527812    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:42.527812    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:42.529169    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:42.529222    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:42.529222    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:42.529222    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:42.531409    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:42.532238    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:42.532238    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:42.532238    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:42 GMT
	I0514 00:17:42.532238    4316 round_trippers.go:580]     Audit-Id: 32228d06-fbcd-42e6-9a2b-36415764043b
	I0514 00:17:42.532238    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:42.532238    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:42.532238    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:42.532439    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:43.023658    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:43.023658    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:43.023658    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:43.023658    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:43.027316    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:43.028364    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:43.028364    4316 round_trippers.go:580]     Audit-Id: cac3d502-1a69-41ef-a6ae-83a4149eec8a
	I0514 00:17:43.028438    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:43.028438    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:43.028438    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:43.028438    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:43.028438    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:43 GMT
	I0514 00:17:43.028841    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:43.029541    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:43.029541    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:43.029541    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:43.029541    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:43.035874    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:43.035874    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:43.035874    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:43.035874    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:43.035874    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:43.035874    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:43 GMT
	I0514 00:17:43.035874    4316 round_trippers.go:580]     Audit-Id: 7d057147-eeb6-49ea-8529-1c1f0753a3b5
	I0514 00:17:43.035874    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:43.035874    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:43.522358    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:43.522358    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:43.522358    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:43.522358    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:43.525993    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:43.525993    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:43.525993    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:43.526544    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:43 GMT
	I0514 00:17:43.526544    4316 round_trippers.go:580]     Audit-Id: 0c69215c-1ad9-413a-8a0b-c4570170e003
	I0514 00:17:43.526544    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:43.526544    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:43.526544    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:43.526809    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:43.527500    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:43.527500    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:43.527580    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:43.527580    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:43.530572    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:43.530572    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:43.530572    4316 round_trippers.go:580]     Audit-Id: b845f9d0-8394-4e1a-aa59-46a48ee02697
	I0514 00:17:43.530572    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:43.530572    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:43.530572    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:43.530572    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:43.530572    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:43 GMT
	I0514 00:17:43.530572    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:44.021694    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:44.021817    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:44.021817    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:44.021817    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:44.024993    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:44.024993    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:44.024993    4316 round_trippers.go:580]     Audit-Id: 1fa03107-1dbf-4e37-a0de-66594064883e
	I0514 00:17:44.024993    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:44.024993    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:44.024993    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:44.024993    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:44.024993    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:44 GMT
	I0514 00:17:44.025616    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:44.026464    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:44.026551    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:44.026551    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:44.026551    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:44.028699    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:44.028699    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:44.028699    4316 round_trippers.go:580]     Audit-Id: 3630a112-874d-446b-8f06-8b3bce332d82
	I0514 00:17:44.028699    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:44.028699    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:44.029256    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:44.029256    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:44.029256    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:44 GMT
	I0514 00:17:44.029528    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:44.519859    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:44.519963    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:44.519963    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:44.519963    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:44.523665    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:44.523665    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:44.523665    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:44.523777    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:44.523777    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:44.523777    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:44 GMT
	I0514 00:17:44.523777    4316 round_trippers.go:580]     Audit-Id: 77ecb428-4c35-4383-9252-5aa64bc134d0
	I0514 00:17:44.523777    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:44.524048    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:44.525059    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:44.525059    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:44.525059    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:44.525132    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:44.531638    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:44.531638    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:44.531638    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:44.531638    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:44.531638    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:44.531638    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:44 GMT
	I0514 00:17:44.531638    4316 round_trippers.go:580]     Audit-Id: 05725037-a9aa-4e85-a952-e155e9475017
	I0514 00:17:44.531638    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:44.532165    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:44.532348    4316 pod_ready.go:102] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"False"
	I0514 00:17:45.018533    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:45.018533    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:45.018533    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:45.018533    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:45.025787    4316 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0514 00:17:45.025787    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:45.025787    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:45.025787    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:45.025787    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:45 GMT
	I0514 00:17:45.025787    4316 round_trippers.go:580]     Audit-Id: 51a76f65-7d7e-40a8-96ed-3bca41035f4a
	I0514 00:17:45.025787    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:45.025787    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:45.026332    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:45.027299    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:45.027299    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:45.027299    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:45.027299    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:45.029876    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:45.029876    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:45.029876    4316 round_trippers.go:580]     Audit-Id: 37f5dd10-668a-415b-9c60-819edb09b861
	I0514 00:17:45.029876    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:45.029876    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:45.030819    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:45.030819    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:45.030819    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:45 GMT
	I0514 00:17:45.031098    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:45.531320    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:45.531427    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:45.531427    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:45.531427    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:45.534320    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:45.534320    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:45.534320    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:45.534320    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:45 GMT
	I0514 00:17:45.534320    4316 round_trippers.go:580]     Audit-Id: 6230fcab-0584-4536-b8d6-e27e3a0859ce
	I0514 00:17:45.534320    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:45.534320    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:45.534320    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:45.534320    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:45.536830    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:45.536830    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:45.536830    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:45.536830    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:45.539658    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:45.539658    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:45.539658    4316 round_trippers.go:580]     Audit-Id: 2cab55b6-3170-428b-8848-1b08d5116ca2
	I0514 00:17:45.539658    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:45.539658    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:45.539658    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:45.539658    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:45.539658    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:45 GMT
	I0514 00:17:45.541017    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:46.029355    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:46.029355    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:46.029355    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:46.029355    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:46.033948    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:46.033948    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:46.033948    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:46.033948    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:46.033948    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:46.033948    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:46.033948    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:46 GMT
	I0514 00:17:46.033948    4316 round_trippers.go:580]     Audit-Id: 4ac9e535-bdef-4c29-81e3-32122b13d977
	I0514 00:17:46.034318    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:46.034927    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:46.035036    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:46.035036    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:46.035036    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:46.037370    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:46.038371    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:46.038371    4316 round_trippers.go:580]     Audit-Id: fee4d1d2-aac3-4605-8a65-3a60ded8c698
	I0514 00:17:46.038371    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:46.038453    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:46.038453    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:46.038453    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:46.038453    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:46 GMT
	I0514 00:17:46.038578    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:46.530963    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:46.531194    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:46.531194    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:46.531194    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:46.535661    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:46.535661    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:46.535661    4316 round_trippers.go:580]     Audit-Id: 7252731b-509f-4a5b-b48a-3d9d9645275b
	I0514 00:17:46.535661    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:46.535661    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:46.535661    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:46.535661    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:46.535661    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:46 GMT
	I0514 00:17:46.535661    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:46.536536    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:46.536536    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:46.536536    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:46.536536    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:46.543062    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:46.543062    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:46.543062    4316 round_trippers.go:580]     Audit-Id: d57ff020-e9f7-4f53-874e-5f0bbb759d3b
	I0514 00:17:46.543062    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:46.543062    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:46.543062    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:46.543062    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:46.543062    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:46 GMT
	I0514 00:17:46.543062    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:46.543705    4316 pod_ready.go:102] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"False"
	I0514 00:17:47.028493    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:47.028739    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:47.028739    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:47.028739    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:47.031945    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:47.031945    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:47.031945    4316 round_trippers.go:580]     Audit-Id: 0a9d3b0d-2bc0-43b6-b14f-f2032c54e4c6
	I0514 00:17:47.031945    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:47.032859    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:47.032859    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:47.032859    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:47.032859    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:47 GMT
	I0514 00:17:47.036995    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:47.038025    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:47.038025    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:47.038025    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:47.038025    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:47.040443    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:47.040443    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:47.040443    4316 round_trippers.go:580]     Audit-Id: 5c129fa9-e0d3-4193-a3d7-729410f27adf
	I0514 00:17:47.040443    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:47.040443    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:47.040443    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:47.040443    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:47.040443    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:47 GMT
	I0514 00:17:47.041351    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:47.527120    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:47.527120    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:47.527120    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:47.527120    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:47.532151    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:17:47.532151    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:47.532151    4316 round_trippers.go:580]     Audit-Id: dd0cabbf-1337-46d3-b794-9135acdd220a
	I0514 00:17:47.532151    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:47.532151    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:47.532151    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:47.532151    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:47.532151    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:47 GMT
	I0514 00:17:47.532938    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:47.534042    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:47.534134    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:47.534134    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:47.534134    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:47.536979    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:47.536979    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:47.536979    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:47 GMT
	I0514 00:17:47.536979    4316 round_trippers.go:580]     Audit-Id: 37c130d7-d5d6-4918-af2d-da47f93de7bd
	I0514 00:17:47.536979    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:47.536979    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:47.536979    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:47.536979    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:47.537619    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:48.025944    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:48.025944    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:48.026028    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:48.026028    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:48.030327    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:48.030618    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:48.030618    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:48 GMT
	I0514 00:17:48.030618    4316 round_trippers.go:580]     Audit-Id: 5c0285b7-949f-4a03-90b7-9fba17059dbe
	I0514 00:17:48.030618    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:48.030618    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:48.030618    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:48.030618    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:48.030618    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:48.031224    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:48.031224    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:48.031224    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:48.031224    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:48.036454    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:17:48.036535    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:48.036535    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:48.036664    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:48.036664    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:48.036664    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:48 GMT
	I0514 00:17:48.036664    4316 round_trippers.go:580]     Audit-Id: d0554898-18a1-4ab4-8efe-68db0b53637e
	I0514 00:17:48.036664    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:48.036664    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:48.524363    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:48.524441    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:48.524441    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:48.524441    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:48.527732    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:48.527840    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:48.527840    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:48 GMT
	I0514 00:17:48.527840    4316 round_trippers.go:580]     Audit-Id: aa1513ea-3bd5-44ca-83df-a4cc0909b7e5
	I0514 00:17:48.527840    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:48.527840    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:48.527840    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:48.527840    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:48.528050    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:48.528687    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:48.528687    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:48.528775    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:48.528775    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:48.530906    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:48.530906    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:48.530906    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:48.530906    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:48.530906    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:48.530906    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:48 GMT
	I0514 00:17:48.530906    4316 round_trippers.go:580]     Audit-Id: 018f832b-ed87-4a1e-9c55-c079465cbe8c
	I0514 00:17:48.530906    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:48.532505    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:49.026265    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:49.026265    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:49.026265    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:49.026265    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:49.029715    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:49.029715    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:49.029715    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:49.029715    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:49 GMT
	I0514 00:17:49.029715    4316 round_trippers.go:580]     Audit-Id: 642aa73d-4189-4abe-b133-39a86a797e34
	I0514 00:17:49.029715    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:49.029715    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:49.030587    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:49.030827    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:49.031840    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:49.031924    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:49.031924    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:49.031924    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:49.034655    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:49.035513    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:49.035513    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:49.035513    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:49.035513    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:49 GMT
	I0514 00:17:49.035513    4316 round_trippers.go:580]     Audit-Id: da6c4825-b264-463b-bb57-9cb4029bc1d4
	I0514 00:17:49.035513    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:49.035513    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:49.035513    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:49.036655    4316 pod_ready.go:102] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"False"
	I0514 00:17:49.523250    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:49.523250    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:49.523250    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:49.523250    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:49.526943    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:49.526943    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:49.526943    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:49 GMT
	I0514 00:17:49.526943    4316 round_trippers.go:580]     Audit-Id: 9deea6e8-5108-4a83-af6e-0ecbffbef704
	I0514 00:17:49.526943    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:49.526943    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:49.526943    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:49.526943    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:49.527537    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:49.528580    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:49.528580    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:49.528698    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:49.528698    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:49.531027    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:49.531027    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:49.531027    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:49.531027    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:49 GMT
	I0514 00:17:49.531027    4316 round_trippers.go:580]     Audit-Id: 0aa5b480-c8d6-4e43-8e08-e5d6df13934a
	I0514 00:17:49.531027    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:49.531027    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:49.531027    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:49.531845    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:50.025361    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:50.025361    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:50.025472    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:50.025472    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:50.029062    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:50.029062    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:50.029062    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:50.029062    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:50.029062    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:50.029062    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:50 GMT
	I0514 00:17:50.029062    4316 round_trippers.go:580]     Audit-Id: 056292cc-f7c7-44b1-975f-a9ab1dc1c8d3
	I0514 00:17:50.029062    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:50.029471    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:50.030226    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:50.030226    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:50.030226    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:50.030226    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:50.032516    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:50.032516    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:50.032516    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:50.032516    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:50.032516    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:50.032516    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:50 GMT
	I0514 00:17:50.032516    4316 round_trippers.go:580]     Audit-Id: b49ea68b-ba4a-44dd-9f09-77598f4a3550
	I0514 00:17:50.032516    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:50.033507    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:50.527792    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:50.528094    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:50.528094    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:50.528094    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:50.533474    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:17:50.533474    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:50.533474    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:50.533474    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:50.533474    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:50.533474    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:50 GMT
	I0514 00:17:50.533474    4316 round_trippers.go:580]     Audit-Id: edb67c8b-4091-4fc6-b7a3-4ab0b3702afe
	I0514 00:17:50.533474    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:50.534010    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:50.534687    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:50.534687    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:50.534687    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:50.534687    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:50.536872    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:50.536872    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:50.536872    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:50.536872    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:50 GMT
	I0514 00:17:50.537786    4316 round_trippers.go:580]     Audit-Id: b2c24719-a758-4715-b0a1-6ad73d86d33f
	I0514 00:17:50.537786    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:50.537786    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:50.537786    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:50.538018    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:51.026950    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:51.027030    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:51.027030    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:51.027030    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:51.030389    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:51.030389    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:51.030389    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:51 GMT
	I0514 00:17:51.030389    4316 round_trippers.go:580]     Audit-Id: 98bc0eee-b424-4261-8299-d6d2273fd477
	I0514 00:17:51.030389    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:51.030389    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:51.030389    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:51.030389    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:51.031159    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:51.031785    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:51.031785    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:51.031785    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:51.031785    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:51.038960    4316 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0514 00:17:51.038960    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:51.038960    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:51.038960    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:51.039722    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:51.039722    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:51 GMT
	I0514 00:17:51.039722    4316 round_trippers.go:580]     Audit-Id: d347b7a8-22be-418c-b29d-1c73e6a6cb47
	I0514 00:17:51.039722    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:51.039759    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:51.039759    4316 pod_ready.go:102] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"False"
	I0514 00:17:51.523392    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:51.523490    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:51.523490    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:51.523490    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:51.527912    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:51.528020    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:51.528020    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:51.528020    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:51.528020    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:51 GMT
	I0514 00:17:51.528020    4316 round_trippers.go:580]     Audit-Id: efcd26a9-206e-4a12-b875-59c1b9a56667
	I0514 00:17:51.528020    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:51.528122    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:51.528382    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:51.528952    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:51.529014    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:51.529014    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:51.529014    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:51.532031    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:51.532031    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:51.532031    4316 round_trippers.go:580]     Audit-Id: e05f1928-9e76-42ab-b917-e5b46fe2af7b
	I0514 00:17:51.532031    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:51.532031    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:51.532031    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:51.532508    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:51.532508    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:51 GMT
	I0514 00:17:51.532858    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:52.022691    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:52.023148    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:52.023148    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:52.023148    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:52.027052    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:52.027052    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:52.027052    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:52.027052    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:52.027185    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:52.027185    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:52 GMT
	I0514 00:17:52.027185    4316 round_trippers.go:580]     Audit-Id: 46cf10a7-354f-4baf-a15a-e8397b0e1ded
	I0514 00:17:52.027185    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:52.027390    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:52.028263    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:52.028263    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:52.028263    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:52.028263    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:52.031481    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:52.031559    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:52.031559    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:52 GMT
	I0514 00:17:52.031559    4316 round_trippers.go:580]     Audit-Id: 9f69a66e-5696-4b63-a090-654f65b81422
	I0514 00:17:52.031628    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:52.031628    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:52.031660    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:52.031660    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:52.032201    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:52.523056    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:52.523056    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:52.523056    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:52.523056    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:52.526681    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:52.527238    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:52.527238    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:52 GMT
	I0514 00:17:52.527324    4316 round_trippers.go:580]     Audit-Id: dc3a466e-3935-4ecc-bb26-6b4b0364121f
	I0514 00:17:52.527324    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:52.527324    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:52.527324    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:52.527324    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:52.527478    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:52.528610    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:52.528610    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:52.528610    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:52.528699    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:52.531421    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:52.531756    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:52.531756    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:52 GMT
	I0514 00:17:52.531756    4316 round_trippers.go:580]     Audit-Id: af408f18-ae4f-44bf-988d-fbca2c6bf110
	I0514 00:17:52.531756    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:52.531756    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:52.531756    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:52.531756    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:52.531756    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:53.021593    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:53.021593    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:53.021593    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:53.021593    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:53.030886    4316 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0514 00:17:53.030886    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:53.030886    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:53.030886    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:53.030886    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:53.030886    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:53 GMT
	I0514 00:17:53.030886    4316 round_trippers.go:580]     Audit-Id: d5198b13-1397-4ebd-a609-4c9adfdcaa37
	I0514 00:17:53.030886    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:53.031480    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:53.031559    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:53.032089    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:53.032089    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:53.032127    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:53.034968    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:53.035117    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:53.035117    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:53 GMT
	I0514 00:17:53.035117    4316 round_trippers.go:580]     Audit-Id: 863871e3-cb22-4b48-ab59-9e78835abc08
	I0514 00:17:53.035117    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:53.035117    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:53.035117    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:53.035117    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:53.035117    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:53.521051    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:53.521051    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:53.521051    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:53.521051    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:53.524610    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:53.525142    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:53.525210    4316 round_trippers.go:580]     Audit-Id: 0a620f09-3bdd-45d6-8a96-2d80f3819fc8
	I0514 00:17:53.525210    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:53.525210    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:53.525210    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:53.525210    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:53.525210    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:53 GMT
	I0514 00:17:53.525417    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:53.526148    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:53.526173    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:53.526173    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:53.526173    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:53.530946    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:53.531092    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:53.531118    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:53 GMT
	I0514 00:17:53.531118    4316 round_trippers.go:580]     Audit-Id: 466c4b1e-c869-4a79-b505-3aaa73af7b4a
	I0514 00:17:53.531118    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:53.531118    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:53.531118    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:53.531118    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:53.531118    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:53.531847    4316 pod_ready.go:102] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"False"
	I0514 00:17:54.025489    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:54.025489    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:54.025598    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:54.025598    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:54.030382    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:54.030382    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:54.030382    4316 round_trippers.go:580]     Audit-Id: b47d5652-2f48-45dd-baf2-ee76a3ece10a
	I0514 00:17:54.030382    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:54.030382    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:54.030382    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:54.030382    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:54.030382    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:54 GMT
	I0514 00:17:54.031721    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:54.032817    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:54.032817    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:54.032890    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:54.032890    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:54.037185    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:54.037185    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:54.037185    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:54.037185    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:54.037185    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:54.037185    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:54 GMT
	I0514 00:17:54.037185    4316 round_trippers.go:580]     Audit-Id: 9ac2b6ef-177a-4417-97fe-e3b597af0f9b
	I0514 00:17:54.037185    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:54.038153    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:54.522088    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:54.522088    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:54.522088    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:54.522088    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:54.525618    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:54.525618    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:54.525618    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:54.525618    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:54 GMT
	I0514 00:17:54.525618    4316 round_trippers.go:580]     Audit-Id: ed1231eb-95a1-4ec2-a6ee-d52675b1c727
	I0514 00:17:54.525618    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:54.525618    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:54.525618    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:54.527020    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:54.527870    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:54.527870    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:54.527870    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:54.527870    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:54.535265    4316 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0514 00:17:54.535265    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:54.535265    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:54 GMT
	I0514 00:17:54.535265    4316 round_trippers.go:580]     Audit-Id: c485bb88-e899-4e8b-94dc-d819aee6a7d4
	I0514 00:17:54.535265    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:54.535265    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:54.535265    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:54.535265    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:54.535265    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:55.033364    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:55.033364    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:55.033364    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:55.033364    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:55.038301    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:55.038392    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:55.038392    4316 round_trippers.go:580]     Audit-Id: 63ece2ae-a610-4321-8d8c-e032e79b23d7
	I0514 00:17:55.038392    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:55.038392    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:55.038392    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:55.038392    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:55.038392    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:55 GMT
	I0514 00:17:55.038651    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:55.039811    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:55.039811    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:55.039897    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:55.039897    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:55.043267    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:55.043267    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:55.043267    4316 round_trippers.go:580]     Audit-Id: b67c0317-b8c3-40f2-bbd2-4aba63e33cd7
	I0514 00:17:55.043267    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:55.043267    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:55.043267    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:55.043267    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:55.043267    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:55 GMT
	I0514 00:17:55.043528    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:55.529916    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:55.529998    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:55.529998    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:55.529998    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:55.533310    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:55.533310    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:55.533310    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:55.533310    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:55.533310    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:55.533310    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:55 GMT
	I0514 00:17:55.533310    4316 round_trippers.go:580]     Audit-Id: 4f3ab88d-95b2-41ef-ba4d-70a22f28f91a
	I0514 00:17:55.533310    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:55.534150    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:55.534861    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:55.534861    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:55.534861    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:55.534918    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:55.537797    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:55.537857    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:55.537857    4316 round_trippers.go:580]     Audit-Id: 5b734f73-adce-46db-8245-6015aa6cbc02
	I0514 00:17:55.537857    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:55.537857    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:55.537896    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:55.537896    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:55.537896    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:55 GMT
	I0514 00:17:55.538002    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:55.538002    4316 pod_ready.go:102] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"False"
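	(The pod_ready check above polls the pod object roughly every 500ms and inspects its "Ready" condition. A minimal, self-contained Go sketch of that condition check — using hypothetical trimmed structs, not minikube's actual types from pod_ready.go — might look like:)

	```go
	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Minimal shapes for the fields the readiness check needs.
	// These are hypothetical, trimmed from the v1 Pod schema.
	type podCondition struct {
		Type   string `json:"type"`
		Status string `json:"status"`
	}

	type podStatus struct {
		Conditions []podCondition `json:"conditions"`
	}

	type pod struct {
		Status podStatus `json:"status"`
	}

	// isPodReady mirrors the kind of check pod_ready.go performs:
	// a pod counts as Ready only if its "Ready" condition is "True".
	func isPodReady(p pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				return c.Status == "True"
			}
		}
		return false
	}

	func main() {
		// Sample payload resembling the polled coredns pod (Ready=False).
		raw := `{"status":{"conditions":[{"type":"Initialized","status":"True"},{"type":"Ready","status":"False"}]}}`
		var p pod
		if err := json.Unmarshal([]byte(raw), &p); err != nil {
			panic(err)
		}
		fmt.Println(isPodReady(p)) // prints "false"
	}
	```

	(The log's `has status "Ready":"False"` line corresponds to this check returning false, which keeps the poll loop running.)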
	I0514 00:17:56.026948    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:56.027296    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:56.027296    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:56.027296    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:56.031178    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:56.031178    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:56.031178    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:56.031178    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:56 GMT
	I0514 00:17:56.031178    4316 round_trippers.go:580]     Audit-Id: d09419d3-13e1-4567-afaa-949a552f4f07
	I0514 00:17:56.031178    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:56.031265    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:56.031265    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:56.031265    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:56.032908    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:56.032992    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:56.032992    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:56.032992    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:56.036385    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:56.036385    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:56.036716    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:56.036716    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:56.036716    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:56.036716    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:56.036716    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:56 GMT
	I0514 00:17:56.036716    4316 round_trippers.go:580]     Audit-Id: e990ca4d-52bc-4ce8-b7ac-aa5512ac0ece
	I0514 00:17:56.036835    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:56.526963    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:56.526963    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:56.526963    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:56.526963    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:56.530895    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:56.530895    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:56.530895    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:56.530895    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:56 GMT
	I0514 00:17:56.530992    4316 round_trippers.go:580]     Audit-Id: adf99547-9dbd-485e-a3cd-4570031c5388
	I0514 00:17:56.530992    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:56.530992    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:56.530992    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:56.531186    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:56.532122    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:56.532122    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:56.532122    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:56.532122    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:56.534649    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:56.534649    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:56.534649    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:56.534649    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:56 GMT
	I0514 00:17:56.534649    4316 round_trippers.go:580]     Audit-Id: 95e68695-4c8c-47a4-bcbe-091d9e7ca165
	I0514 00:17:56.534649    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:56.535581    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:56.535581    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:56.535738    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:57.022063    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:57.022063    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:57.022063    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:57.022063    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:57.025716    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:57.025716    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:57.025716    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:57.025961    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:57.025961    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:57 GMT
	I0514 00:17:57.025961    4316 round_trippers.go:580]     Audit-Id: 68259e4b-4069-471b-a8b6-166e95a74498
	I0514 00:17:57.025961    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:57.025961    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:57.026103    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:57.026762    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:57.026762    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:57.026762    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:57.026762    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:57.029561    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:57.029561    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:57.029561    4316 round_trippers.go:580]     Audit-Id: 719b929f-0462-46cd-8554-0182ff5deb56
	I0514 00:17:57.029561    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:57.029561    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:57.029561    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:57.029561    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:57.029561    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:57 GMT
	I0514 00:17:57.030449    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:57.521718    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:57.521718    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:57.521718    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:57.521718    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:57.524837    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:57.524837    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:57.524837    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:57 GMT
	I0514 00:17:57.524837    4316 round_trippers.go:580]     Audit-Id: a5123dd3-6a71-45b4-929f-98de09033747
	I0514 00:17:57.524837    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:57.524837    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:57.524837    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:57.524837    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:57.525811    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:57.526499    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:57.526499    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:57.526499    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:57.526499    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:57.529860    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:57.529860    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:57.529860    4316 round_trippers.go:580]     Audit-Id: dd9b9175-3046-4168-87e4-ecbccf307082
	I0514 00:17:57.529860    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:57.529860    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:57.529860    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:57.529860    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:57.529994    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:57 GMT
	I0514 00:17:57.530421    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:58.023735    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:58.023735    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:58.023735    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:58.023735    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:58.028021    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:58.028462    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:58.028462    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:58.028462    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:58.028462    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:58.028462    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:58 GMT
	I0514 00:17:58.028462    4316 round_trippers.go:580]     Audit-Id: 3ad01ec4-c086-44bf-908c-a03eb33ea21d
	I0514 00:17:58.028462    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:58.028935    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:58.029809    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:58.029991    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:58.029991    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:58.029991    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:58.033352    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:58.033352    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:58.033352    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:58.033352    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:58.033352    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:58 GMT
	I0514 00:17:58.033352    4316 round_trippers.go:580]     Audit-Id: 6ed9c178-66b7-416e-985e-f90b677c332c
	I0514 00:17:58.033352    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:58.033352    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:58.034255    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:58.034756    4316 pod_ready.go:102] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"False"
	I0514 00:17:58.523017    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:58.523017    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:58.523017    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:58.523017    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:58.526971    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:58.527044    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:58.527044    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:58.527044    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:58.527044    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:58.527044    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:58 GMT
	I0514 00:17:58.527129    4316 round_trippers.go:580]     Audit-Id: 10da8c0f-ad6d-4f65-8225-29ba4b0231a6
	I0514 00:17:58.527129    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:58.527407    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:58.528515    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:58.528593    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:58.528593    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:58.528593    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:58.531572    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:58.531572    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:58.531572    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:58.531572    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:58.531572    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:58 GMT
	I0514 00:17:58.531572    4316 round_trippers.go:580]     Audit-Id: b92384a7-0912-45d4-99ba-1addbaaf30c3
	I0514 00:17:58.531572    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:58.531572    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:58.532103    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:59.020580    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:59.020884    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:59.020884    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:59.020884    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:59.024921    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:59.025614    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:59.025614    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:59.025726    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:59.025726    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:59.025726    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:59 GMT
	I0514 00:17:59.025726    4316 round_trippers.go:580]     Audit-Id: a2fb6152-c040-4879-b53f-08f2bcfbc50a
	I0514 00:17:59.025726    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:59.025850    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:59.026944    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:59.027020    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:59.027082    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:59.027082    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:59.029904    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:59.029904    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:59.029904    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:59.029904    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:59.029904    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:59 GMT
	I0514 00:17:59.029904    4316 round_trippers.go:580]     Audit-Id: be1b58a7-12e4-48de-a7c8-744732e6b6db
	I0514 00:17:59.029904    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:59.029904    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:59.029904    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:59.519408    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:59.519649    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:59.519649    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:59.519649    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:59.523214    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:59.524188    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:59.524188    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:59.524188    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:59.524188    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:59.524188    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:59 GMT
	I0514 00:17:59.524188    4316 round_trippers.go:580]     Audit-Id: 85d171de-4c07-4851-b217-65dbffd5c873
	I0514 00:17:59.524290    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:59.524366    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:59.525028    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:59.525028    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:59.525028    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:59.525551    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:59.531915    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:59.531915    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:59.531915    4316 round_trippers.go:580]     Audit-Id: ff26dd51-776d-42ba-9ace-873308d21e37
	I0514 00:17:59.531915    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:59.531915    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:59.531915    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:59.531915    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:59.531915    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:59 GMT
	I0514 00:17:59.532507    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:18:00.025115    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:18:00.025115    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:00.025115    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:00.025173    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:00.028377    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:18:00.028377    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:00.028377    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:00.028377    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:00.028947    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:00.028947    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:00.028947    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:00 GMT
	I0514 00:18:00.028947    4316 round_trippers.go:580]     Audit-Id: 5659c070-8a0e-4100-be51-3155801ecefc
	I0514 00:18:00.029104    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:18:00.029878    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:18:00.029878    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:00.029967    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:00.029967    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:00.058381    4316 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0514 00:18:00.059359    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:00.059401    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:00.059401    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:00.059401    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:00.059401    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:00.059401    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:00 GMT
	I0514 00:18:00.059401    4316 round_trippers.go:580]     Audit-Id: 46df3cf4-5a07-4dca-abe8-bf1c00a5409b
	I0514 00:18:00.059901    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:18:00.059901    4316 pod_ready.go:102] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"False"
	I0514 00:18:00.524592    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:18:00.524592    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:00.524592    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:00.524592    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:00.528956    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:18:00.528956    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:00.528956    4316 round_trippers.go:580]     Audit-Id: 6b0d422d-1c7f-4d13-afa0-0bb07da07442
	I0514 00:18:00.528956    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:00.528956    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:00.528956    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:00.528956    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:00.528956    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:00 GMT
	I0514 00:18:00.530013    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:18:00.531146    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:18:00.531146    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:00.531225    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:00.531225    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:00.538426    4316 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0514 00:18:00.539198    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:00.539198    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:00.539198    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:00.539198    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:00 GMT
	I0514 00:18:00.539198    4316 round_trippers.go:580]     Audit-Id: 08fde253-009a-4fa4-a7b3-70c265d850f1
	I0514 00:18:00.539198    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:00.539262    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:00.539439    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:18:01.033970    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:18:01.034341    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.034341    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.034341    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.037696    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:18:01.037696    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.037696    4316 round_trippers.go:580]     Audit-Id: e4350530-c24e-416d-b671-826d40a28a66
	I0514 00:18:01.038194    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.038194    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.038194    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.038194    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.038194    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.038409    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:18:01.039048    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:18:01.039048    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.039048    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.039048    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.043288    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:18:01.043975    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.043975    4316 round_trippers.go:580]     Audit-Id: cd43dbcf-4193-4f7c-8595-35b44eacd72b
	I0514 00:18:01.043975    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.044096    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.044096    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.044096    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.044096    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.044096    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:18:01.531936    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:18:01.531936    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.531936    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.531936    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.534663    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:18:01.535611    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.535611    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.535611    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.535611    4316 round_trippers.go:580]     Audit-Id: 4d7e7edb-687a-4196-a485-9b840fe63b11
	I0514 00:18:01.535611    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.535611    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.535611    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.535819    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1851","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0514 00:18:01.536407    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:18:01.536407    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.536407    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.536532    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.539719    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:18:01.539719    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.539719    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.539719    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.539820    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.539820    4316 round_trippers.go:580]     Audit-Id: 9bc018a7-bb5b-45af-9a9d-23eab50fdf69
	I0514 00:18:01.539820    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.539820    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.540050    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:18:01.540439    4316 pod_ready.go:92] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"True"
	I0514 00:18:01.540501    4316 pod_ready.go:81] duration metric: took 25.5219074s for pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:01.540501    4316 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:01.540617    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-101100
	I0514 00:18:01.540617    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.540617    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.540617    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.543931    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:18:01.543931    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.543931    4316 round_trippers.go:580]     Audit-Id: a4d5238b-9208-4c3d-99ab-6ec97ec1b248
	I0514 00:18:01.543931    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.543931    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.543931    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.543931    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.543931    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.543931    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-101100","namespace":"kube-system","uid":"74cd34fe-a56b-453d-afb3-a9db3db0d5ba","resourceVersion":"1779","creationTimestamp":"2024-05-14T00:16:55Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.102.122:2379","kubernetes.io/config.hash":"62d8afc7714e8ab65bff9675d120bb67","kubernetes.io/config.mirror":"62d8afc7714e8ab65bff9675d120bb67","kubernetes.io/config.seen":"2024-05-14T00:16:49.843121737Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:16:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0514 00:18:01.543931    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:18:01.543931    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.543931    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.543931    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.547176    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:18:01.547176    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.547176    4316 round_trippers.go:580]     Audit-Id: 0bd1c733-6404-4efb-9feb-211c75cce9c6
	I0514 00:18:01.547176    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.547176    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.547176    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.547176    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.547176    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.547733    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:18:01.548182    4316 pod_ready.go:92] pod "etcd-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0514 00:18:01.548239    4316 pod_ready.go:81] duration metric: took 7.7376ms for pod "etcd-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:01.548239    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:01.548297    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-101100
	I0514 00:18:01.548377    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.548377    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.548377    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.550708    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:18:01.550708    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.550708    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.550708    4316 round_trippers.go:580]     Audit-Id: e6549549-8b0a-465f-ae81-e4500ff8c23b
	I0514 00:18:01.550708    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.550708    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.550708    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.550708    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.551708    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-101100","namespace":"kube-system","uid":"60889645-4c2d-4cfc-b322-c0f1b6e34503","resourceVersion":"1775","creationTimestamp":"2024-05-14T00:16:55Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.102.122:8443","kubernetes.io/config.hash":"378d61cf78af695f1df41e321907a84d","kubernetes.io/config.mirror":"378d61cf78af695f1df41e321907a84d","kubernetes.io/config.seen":"2024-05-14T00:16:49.778409853Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:16:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0514 00:18:01.551708    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:18:01.551708    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.551708    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.551708    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.554577    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:18:01.554935    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.554935    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.554935    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.554935    4316 round_trippers.go:580]     Audit-Id: 25f56a5f-ef6a-4957-a46d-45444bacea79
	I0514 00:18:01.554935    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.554935    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.554935    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.555140    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:18:01.555496    4316 pod_ready.go:92] pod "kube-apiserver-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0514 00:18:01.555496    4316 pod_ready.go:81] duration metric: took 7.1994ms for pod "kube-apiserver-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:01.555496    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:01.555621    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-101100
	I0514 00:18:01.555621    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.555621    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.555621    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.557990    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:18:01.557990    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.557990    4316 round_trippers.go:580]     Audit-Id: fe1fefce-483d-44cd-b309-d34878a37069
	I0514 00:18:01.557990    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.557990    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.557990    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.557990    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.557990    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.558434    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-101100","namespace":"kube-system","uid":"1a74381a-7477-4fd3-b344-c4a230014f97","resourceVersion":"1752","creationTimestamp":"2024-05-13T23:56:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5393de2704b2efef461d22fa52aa93c8","kubernetes.io/config.mirror":"5393de2704b2efef461d22fa52aa93c8","kubernetes.io/config.seen":"2024-05-13T23:56:09.392106640Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0514 00:18:01.559028    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:18:01.559094    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.559094    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.559094    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.560992    4316 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0514 00:18:01.560992    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.560992    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.561620    4316 round_trippers.go:580]     Audit-Id: 6307f424-36a6-466e-9567-4fe96b8d38f6
	I0514 00:18:01.561620    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.561620    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.561620    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.561620    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.561832    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:18:01.561832    4316 pod_ready.go:92] pod "kube-controller-manager-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0514 00:18:01.561832    4316 pod_ready.go:81] duration metric: took 6.2693ms for pod "kube-controller-manager-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:01.561832    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8zsgn" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:01.561832    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8zsgn
	I0514 00:18:01.561832    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.561832    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.561832    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.564515    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:18:01.565329    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.565329    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.565329    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.565329    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.565329    4316 round_trippers.go:580]     Audit-Id: e4d422cf-3312-4572-95b7-3cd989d5170b
	I0514 00:18:01.565329    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.565329    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.565644    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8zsgn","generateName":"kube-proxy-","namespace":"kube-system","uid":"af208cbd-fa8a-4822-9b19-dc30f63fa59c","resourceVersion":"1621","creationTimestamp":"2024-05-14T00:03:17Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:03:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0514 00:18:01.566193    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:18:01.566193    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.566193    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.566193    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.569952    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:18:01.569952    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.569952    4316 round_trippers.go:580]     Audit-Id: e8babbe8-c2d4-4bdf-9dda-6009c6329cda
	I0514 00:18:01.569952    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.569952    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.569952    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.569952    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.569952    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.569952    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"fd2d4a0b-dc97-4959-b2ba-0f51719ad2b3","resourceVersion":"1836","creationTimestamp":"2024-05-14T00:12:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_12_45_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:12:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4400 chars]
	I0514 00:18:01.569952    4316 pod_ready.go:97] node "multinode-101100-m03" hosting pod "kube-proxy-8zsgn" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100-m03" has status "Ready":"Unknown"
	I0514 00:18:01.569952    4316 pod_ready.go:81] duration metric: took 8.12ms for pod "kube-proxy-8zsgn" in "kube-system" namespace to be "Ready" ...
	E0514 00:18:01.569952    4316 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-101100-m03" hosting pod "kube-proxy-8zsgn" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100-m03" has status "Ready":"Unknown"
	I0514 00:18:01.569952    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b25hq" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:01.735815    4316 request.go:629] Waited for 165.8525ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b25hq
	I0514 00:18:01.736053    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b25hq
	I0514 00:18:01.736053    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.736053    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.736053    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.740492    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:18:01.740492    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.740492    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.740492    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.740492    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.740492    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.740492    4316 round_trippers.go:580]     Audit-Id: cee2f8af-4f02-4a05-85ab-785fc8dcfbd3
	I0514 00:18:01.740492    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.741005    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-b25hq","generateName":"kube-proxy-","namespace":"kube-system","uid":"d39f5818-3e88-4162-a7ce-734ca28103bf","resourceVersion":"1641","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0514 00:18:01.941794    4316 request.go:629] Waited for 199.7542ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:18:01.941970    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:18:01.941970    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.941970    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.941970    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.948189    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:18:01.949105    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.949105    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:02 GMT
	I0514 00:18:01.949105    4316 round_trippers.go:580]     Audit-Id: 1d549463-bcf5-4662-b83f-0fb779213b5e
	I0514 00:18:01.949105    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.949105    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.949105    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.949105    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.949105    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"1842","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4582 chars]
	I0514 00:18:01.949955    4316 pod_ready.go:97] node "multinode-101100-m02" hosting pod "kube-proxy-b25hq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100-m02" has status "Ready":"Unknown"
	I0514 00:18:01.949955    4316 pod_ready.go:81] duration metric: took 379.9789ms for pod "kube-proxy-b25hq" in "kube-system" namespace to be "Ready" ...
	E0514 00:18:01.949955    4316 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-101100-m02" hosting pod "kube-proxy-b25hq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100-m02" has status "Ready":"Unknown"
	I0514 00:18:01.949955    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zhcz6" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:02.143393    4316 request.go:629] Waited for 193.3172ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zhcz6
	I0514 00:18:02.143763    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zhcz6
	I0514 00:18:02.143763    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:02.143763    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:02.143763    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:02.147971    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:18:02.148220    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:02.148220    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:02.148220    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:02.148220    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:02 GMT
	I0514 00:18:02.148220    4316 round_trippers.go:580]     Audit-Id: c14fdba1-417a-4f85-939c-db933bba548d
	I0514 00:18:02.148220    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:02.148220    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:02.148360    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zhcz6","generateName":"kube-proxy-","namespace":"kube-system","uid":"a9a488af-41ba-47f3-87b0-5a2f062afad6","resourceVersion":"1732","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0514 00:18:02.332179    4316 request.go:629] Waited for 183.029ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:18:02.332355    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:18:02.332355    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:02.332355    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:02.332457    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:02.338298    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:18:02.338298    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:02.338298    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:02.338298    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:02 GMT
	I0514 00:18:02.338298    4316 round_trippers.go:580]     Audit-Id: 4b8848c3-b000-4713-88f5-f88264a7ce60
	I0514 00:18:02.338298    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:02.338298    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:02.338298    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:02.338298    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:18:02.339570    4316 pod_ready.go:92] pod "kube-proxy-zhcz6" in "kube-system" namespace has status "Ready":"True"
	I0514 00:18:02.339602    4316 pod_ready.go:81] duration metric: took 389.6226ms for pod "kube-proxy-zhcz6" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:02.339602    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:02.533116    4316 request.go:629] Waited for 193.3558ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-101100
	I0514 00:18:02.533580    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-101100
	I0514 00:18:02.533674    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:02.533674    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:02.533674    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:02.536976    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:18:02.536976    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:02.536976    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:02.536976    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:02.536976    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:02.536976    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:02.537231    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:02 GMT
	I0514 00:18:02.537231    4316 round_trippers.go:580]     Audit-Id: 90628ba1-abda-4268-9296-71c2992d3d08
	I0514 00:18:02.537492    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-101100","namespace":"kube-system","uid":"d7300c2d-377f-4061-bd34-5f7593b7e827","resourceVersion":"1756","creationTimestamp":"2024-05-13T23:56:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8083abd658221f47cabf81a00c4ca98e","kubernetes.io/config.mirror":"8083abd658221f47cabf81a00c4ca98e","kubernetes.io/config.seen":"2024-05-13T23:56:09.392108241Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0514 00:18:02.733903    4316 request.go:629] Waited for 195.5807ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:18:02.733903    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:18:02.733903    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:02.733903    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:02.733903    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:02.737519    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:18:02.737519    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:02.737519    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:02.737519    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:02.737519    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:02.737519    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:02 GMT
	I0514 00:18:02.737519    4316 round_trippers.go:580]     Audit-Id: 61cf8449-5c39-452f-9021-4fb1e40d8ce9
	I0514 00:18:02.737519    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:02.738146    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:18:02.738146    4316 pod_ready.go:92] pod "kube-scheduler-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0514 00:18:02.738146    4316 pod_ready.go:81] duration metric: took 398.5183ms for pod "kube-scheduler-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:02.738146    4316 pod_ready.go:38] duration metric: took 26.7304415s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 00:18:02.738146    4316 api_server.go:52] waiting for apiserver process to appear ...
	I0514 00:18:02.745047    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0514 00:18:02.763952    4316 command_runner.go:130] > da9e6534cd87
	I0514 00:18:02.763952    4316 logs.go:276] 1 containers: [da9e6534cd87]
	I0514 00:18:02.770566    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0514 00:18:02.786853    4316 command_runner.go:130] > 08450c853590
	I0514 00:18:02.788436    4316 logs.go:276] 1 containers: [08450c853590]
	I0514 00:18:02.794094    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0514 00:18:02.812312    4316 command_runner.go:130] > dcc5a109288b
	I0514 00:18:02.812566    4316 command_runner.go:130] > 76c5ab7859ef
	I0514 00:18:02.813606    4316 logs.go:276] 2 containers: [dcc5a109288b 76c5ab7859ef]
	I0514 00:18:02.819195    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0514 00:18:02.840176    4316 command_runner.go:130] > d3581c1c570c
	I0514 00:18:02.840855    4316 command_runner.go:130] > 964887fc5d36
	I0514 00:18:02.841379    4316 logs.go:276] 2 containers: [d3581c1c570c 964887fc5d36]
	I0514 00:18:02.847426    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0514 00:18:02.865799    4316 command_runner.go:130] > b2a1b31cd7de
	I0514 00:18:02.865799    4316 command_runner.go:130] > 91edaaa00da2
	I0514 00:18:02.865799    4316 logs.go:276] 2 containers: [b2a1b31cd7de 91edaaa00da2]
	I0514 00:18:02.871795    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0514 00:18:02.895811    4316 command_runner.go:130] > b87239d1199a
	I0514 00:18:02.895811    4316 command_runner.go:130] > e96f94398d6d
	I0514 00:18:02.895811    4316 logs.go:276] 2 containers: [b87239d1199a e96f94398d6d]
	I0514 00:18:02.902449    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0514 00:18:02.921399    4316 command_runner.go:130] > 2b424a7cd98c
	I0514 00:18:02.921399    4316 command_runner.go:130] > b7d8d9a5e5ea
	I0514 00:18:02.922943    4316 logs.go:276] 2 containers: [2b424a7cd98c b7d8d9a5e5ea]
	I0514 00:18:02.923035    4316 logs.go:123] Gathering logs for container status ...
	I0514 00:18:02.923035    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0514 00:18:02.985754    4316 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0514 00:18:02.985876    4316 command_runner.go:130] > 3d0b2f0362eb4       8c811b4aec35f                                                                                         3 seconds ago        Running             busybox                   1                   8cb9b6d6d0915       busybox-fc5497c4f-xqj6w
	I0514 00:18:02.985908    4316 command_runner.go:130] > dcc5a109288b6       cbb01a7bd410d                                                                                         3 seconds ago        Running             coredns                   1                   1cccb5e8cee3b       coredns-7db6d8ff4d-4kmx4
	I0514 00:18:02.985908    4316 command_runner.go:130] > bde84ba2d4ed7       6e38f40d628db                                                                                         24 seconds ago       Running             storage-provisioner       2                   468a0e2976ae4       storage-provisioner
	I0514 00:18:02.985969    4316 command_runner.go:130] > 2b424a7cd98c8       4950bb10b3f87                                                                                         36 seconds ago       Running             kindnet-cni               2                   5233e076edceb       kindnet-9q2tv
	I0514 00:18:02.985999    4316 command_runner.go:130] > b7d8d9a5e5eaf       4950bb10b3f87                                                                                         About a minute ago   Exited              kindnet-cni               1                   5233e076edceb       kindnet-9q2tv
	I0514 00:18:02.986049    4316 command_runner.go:130] > b142687b621f1       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   468a0e2976ae4       storage-provisioner
	I0514 00:18:02.986082    4316 command_runner.go:130] > b2a1b31cd7dee       a0bf559e280cf                                                                                         About a minute ago   Running             kube-proxy                1                   a8ac60a565998       kube-proxy-zhcz6
	I0514 00:18:02.986082    4316 command_runner.go:130] > 08450c853590d       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   419648c0d4053       etcd-multinode-101100
	I0514 00:18:02.986182    4316 command_runner.go:130] > da9e6534cd87d       c42f13656d0b2                                                                                         About a minute ago   Running             kube-apiserver            0                   509b8407e0955       kube-apiserver-multinode-101100
	I0514 00:18:02.986182    4316 command_runner.go:130] > d3581c1c570cf       259c8277fcbbc                                                                                         About a minute ago   Running             kube-scheduler            1                   ddcaadef980ac       kube-scheduler-multinode-101100
	I0514 00:18:02.986219    4316 command_runner.go:130] > b87239d1199ab       c7aad43836fa5                                                                                         About a minute ago   Running             kube-controller-manager   1                   659643d47b9ae       kube-controller-manager-multinode-101100
	I0514 00:18:02.986259    4316 command_runner.go:130] > 57dea5416eb67       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago       Exited              busybox                   0                   76d1b8ce19aba       busybox-fc5497c4f-xqj6w
	I0514 00:18:02.986259    4316 command_runner.go:130] > 76c5ab7859eff       cbb01a7bd410d                                                                                         21 minutes ago       Exited              coredns                   0                   8bb49b28c842a       coredns-7db6d8ff4d-4kmx4
	I0514 00:18:02.986295    4316 command_runner.go:130] > 91edaaa00da23       a0bf559e280cf                                                                                         21 minutes ago       Exited              kube-proxy                0                   9bd694480978f       kube-proxy-zhcz6
	I0514 00:18:02.986335    4316 command_runner.go:130] > e96f94398d6dd       c7aad43836fa5                                                                                         22 minutes ago       Exited              kube-controller-manager   0                   da9268fd6556b       kube-controller-manager-multinode-101100
	I0514 00:18:02.986378    4316 command_runner.go:130] > 964887fc5d362       259c8277fcbbc                                                                                         22 minutes ago       Exited              kube-scheduler            0                   fcb3b27edcd2a       kube-scheduler-multinode-101100
	I0514 00:18:02.988807    4316 logs.go:123] Gathering logs for coredns [76c5ab7859ef] ...
	I0514 00:18:02.988879    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5ab7859ef"
	I0514 00:18:03.013508    4316 command_runner.go:130] > .:53
	I0514 00:18:03.013539    4316 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = aa3c53a4fee7c79042020c4ad5abc53f615c90ace85c56ddcef4febd643c83c914a53a500e1bfe4eab6dd4f6a22b9d2014a8ba875b505ed10d3063ed95ac2ed3
	I0514 00:18:03.013539    4316 command_runner.go:130] > CoreDNS-1.11.1
	I0514 00:18:03.013539    4316 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0514 00:18:03.013620    4316 command_runner.go:130] > [INFO] 127.0.0.1:57161 - 45698 "HINFO IN 8990392176501838712.5889638972791529478. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051692136s
	I0514 00:18:03.013620    4316 command_runner.go:130] > [INFO] 10.244.1.2:55099 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000211505s
	I0514 00:18:03.013620    4316 command_runner.go:130] > [INFO] 10.244.1.2:55878 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.185519855s
	I0514 00:18:03.013694    4316 command_runner.go:130] > [INFO] 10.244.1.2:33619 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.15684109s
	I0514 00:18:03.013694    4316 command_runner.go:130] > [INFO] 10.244.1.2:49440 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.197645067s
	I0514 00:18:03.013694    4316 command_runner.go:130] > [INFO] 10.244.0.3:50960 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000430608s
	I0514 00:18:03.013694    4316 command_runner.go:130] > [INFO] 10.244.0.3:46839 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000167103s
	I0514 00:18:03.013694    4316 command_runner.go:130] > [INFO] 10.244.0.3:55330 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000155803s
	I0514 00:18:03.013776    4316 command_runner.go:130] > [INFO] 10.244.0.3:50874 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000131802s
	I0514 00:18:03.013776    4316 command_runner.go:130] > [INFO] 10.244.1.2:53724 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096802s
	I0514 00:18:03.013847    4316 command_runner.go:130] > [INFO] 10.244.1.2:59752 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.042707366s
	I0514 00:18:03.013847    4316 command_runner.go:130] > [INFO] 10.244.1.2:54429 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000269706s
	I0514 00:18:03.013847    4316 command_runner.go:130] > [INFO] 10.244.1.2:48558 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000262605s
	I0514 00:18:03.013847    4316 command_runner.go:130] > [INFO] 10.244.1.2:46986 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023487677s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.1.2:60460 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174903s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.1.2:60672 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000204304s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.1.2:36311 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110402s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.0.3:43910 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000301006s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.0.3:52495 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000145803s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.0.3:46357 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066702s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.0.3:41390 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062301s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.0.3:35739 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000084301s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.0.3:44800 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000163303s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.0.3:57631 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068702s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.0.3:50842 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135702s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.1.2:41210 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204604s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.1.2:57858 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073801s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.1.2:48782 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000152303s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.1.2:36081 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121002s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.0.3:46909 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115002s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.0.3:36030 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000220205s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.0.3:56187 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059401s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.0.3:51500 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099802s
	I0514 00:18:03.014495    4316 command_runner.go:130] > [INFO] 10.244.1.2:57247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147903s
	I0514 00:18:03.014495    4316 command_runner.go:130] > [INFO] 10.244.1.2:46132 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000170203s
	I0514 00:18:03.014552    4316 command_runner.go:130] > [INFO] 10.244.1.2:57206 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000452309s
	I0514 00:18:03.014552    4316 command_runner.go:130] > [INFO] 10.244.1.2:44795 - 5 "PTR IN 1.96.23.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000146203s
	I0514 00:18:03.014588    4316 command_runner.go:130] > [INFO] 10.244.0.3:33385 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000082102s
	I0514 00:18:03.014649    4316 command_runner.go:130] > [INFO] 10.244.0.3:56742 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000173704s
	I0514 00:18:03.014649    4316 command_runner.go:130] > [INFO] 10.244.0.3:46927 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185904s
	I0514 00:18:03.014716    4316 command_runner.go:130] > [INFO] 10.244.0.3:42956 - 5 "PTR IN 1.96.23.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000054801s
	I0514 00:18:03.014758    4316 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0514 00:18:03.014758    4316 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0514 00:18:03.018567    4316 logs.go:123] Gathering logs for kube-scheduler [d3581c1c570c] ...
	I0514 00:18:03.018567    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3581c1c570c"
	I0514 00:18:03.040884    4316 command_runner.go:130] ! I0514 00:16:52.716401       1 serving.go:380] Generated self-signed cert in-memory
	I0514 00:18:03.040884    4316 command_runner.go:130] ! W0514 00:16:54.858727       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0514 00:18:03.040884    4316 command_runner.go:130] ! W0514 00:16:54.858778       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:03.040884    4316 command_runner.go:130] ! W0514 00:16:54.858790       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0514 00:18:03.040884    4316 command_runner.go:130] ! W0514 00:16:54.858800       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0514 00:18:03.040884    4316 command_runner.go:130] ! I0514 00:16:54.945438       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0514 00:18:03.040884    4316 command_runner.go:130] ! I0514 00:16:54.945867       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:03.041447    4316 command_runner.go:130] ! I0514 00:16:54.953986       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0514 00:18:03.041447    4316 command_runner.go:130] ! I0514 00:16:54.957180       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0514 00:18:03.041479    4316 command_runner.go:130] ! I0514 00:16:54.957284       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:18:03.041479    4316 command_runner.go:130] ! I0514 00:16:54.957493       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:03.041479    4316 command_runner.go:130] ! I0514 00:16:55.058381       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:18:03.043755    4316 logs.go:123] Gathering logs for kube-scheduler [964887fc5d36] ...
	I0514 00:18:03.043755    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964887fc5d36"
	I0514 00:18:03.068616    4316 command_runner.go:130] ! I0513 23:56:04.693680       1 serving.go:380] Generated self-signed cert in-memory
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.133341       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.133396       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.133407       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.133415       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0514 00:18:03.068616    4316 command_runner.go:130] ! I0513 23:56:06.170291       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0514 00:18:03.068616    4316 command_runner.go:130] ! I0513 23:56:06.170533       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:03.068616    4316 command_runner.go:130] ! I0513 23:56:06.174536       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0514 00:18:03.068616    4316 command_runner.go:130] ! I0513 23:56:06.174684       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0514 00:18:03.068616    4316 command_runner.go:130] ! I0513 23:56:06.174703       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:03.068616    4316 command_runner.go:130] ! I0513 23:56:06.174918       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.182722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.186053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.183583       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.183698       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.183781       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.183835       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.183868       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.184039       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.186929       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.186969       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.187026       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.188647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.188112       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.188121       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.188233       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.188242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.189252       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.189533       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.189643       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.189773       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.190106       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.190324       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.190538       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.191036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.191581       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.192160       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.191626       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.069820    4316 command_runner.go:130] ! E0513 23:56:06.192721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.069820    4316 command_runner.go:130] ! W0513 23:56:06.190821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0514 00:18:03.069865    4316 command_runner.go:130] ! E0513 23:56:06.193134       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0514 00:18:03.069865    4316 command_runner.go:130] ! W0513 23:56:07.154218       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0514 00:18:03.069926    4316 command_runner.go:130] ! E0513 23:56:07.155376       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0514 00:18:03.069964    4316 command_runner.go:130] ! W0513 23:56:07.229548       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0514 00:18:03.069964    4316 command_runner.go:130] ! E0513 23:56:07.229613       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0514 00:18:03.070012    4316 command_runner.go:130] ! W0513 23:56:07.344429       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.070049    4316 command_runner.go:130] ! E0513 23:56:07.344853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.070049    4316 command_runner.go:130] ! W0513 23:56:07.410556       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0514 00:18:03.070049    4316 command_runner.go:130] ! E0513 23:56:07.410716       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0514 00:18:03.070102    4316 command_runner.go:130] ! W0513 23:56:07.423084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0514 00:18:03.070136    4316 command_runner.go:130] ! E0513 23:56:07.423126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0514 00:18:03.070193    4316 command_runner.go:130] ! W0513 23:56:07.467897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0514 00:18:03.070243    4316 command_runner.go:130] ! E0513 23:56:07.467939       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0514 00:18:03.070277    4316 command_runner.go:130] ! W0513 23:56:07.484903       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0514 00:18:03.070315    4316 command_runner.go:130] ! E0513 23:56:07.485019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0514 00:18:03.070315    4316 command_runner.go:130] ! W0513 23:56:07.545758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0514 00:18:03.070379    4316 command_runner.go:130] ! E0513 23:56:07.546087       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0514 00:18:03.070405    4316 command_runner.go:130] ! W0513 23:56:07.573884       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.070405    4316 command_runner.go:130] ! E0513 23:56:07.573980       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.070405    4316 command_runner.go:130] ! W0513 23:56:07.633780       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.070405    4316 command_runner.go:130] ! E0513 23:56:07.633901       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.070405    4316 command_runner.go:130] ! W0513 23:56:07.680821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0514 00:18:03.070405    4316 command_runner.go:130] ! E0513 23:56:07.680938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0514 00:18:03.070405    4316 command_runner.go:130] ! W0513 23:56:07.704130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0514 00:18:03.070405    4316 command_runner.go:130] ! E0513 23:56:07.704357       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0514 00:18:03.070405    4316 command_runner.go:130] ! W0513 23:56:07.736914       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:03.070405    4316 command_runner.go:130] ! E0513 23:56:07.737079       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:03.070405    4316 command_runner.go:130] ! W0513 23:56:07.754367       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0514 00:18:03.070405    4316 command_runner.go:130] ! E0513 23:56:07.754798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0514 00:18:03.070405    4316 command_runner.go:130] ! I0513 23:56:09.676327       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:18:03.070405    4316 command_runner.go:130] ! E0514 00:14:35.689344       1 run.go:74] "command failed" err="finished without leader elect"
	I0514 00:18:03.079025    4316 logs.go:123] Gathering logs for kindnet [2b424a7cd98c] ...
	I0514 00:18:03.079025    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b424a7cd98c"
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:28.349800       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:28.349935       1 main.go:107] hostIP = 172.23.102.122
	I0514 00:18:03.104051    4316 command_runner.go:130] ! podIP = 172.23.102.122
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:28.441282       1 main.go:116] setting mtu 1500 for CNI 
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:28.441413       1 main.go:146] kindnetd IP family: "ipv4"
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:28.441441       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:29.045047       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:29.045110       1 main.go:227] handling current node
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:29.045545       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:29.045580       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:29.045839       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.23.109.58 Flags: [] Table: 0} 
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:29.045983       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:29.045993       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:29.046039       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.23.102.231 Flags: [] Table: 0} 
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:39.055904       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:39.056127       1 main.go:227] handling current node
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:39.056141       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:39.056155       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:39.056412       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:39.056502       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:49.062369       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:49.062453       1 main.go:227] handling current node
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:49.062465       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:49.062483       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:49.062816       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:49.062843       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:59.075229       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:59.075506       1 main.go:227] handling current node
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:59.075588       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:59.075650       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:59.075827       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:59.075835       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:03.106777    4316 logs.go:123] Gathering logs for Docker ...
	I0514 00:18:03.106854    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0514 00:18:03.138012    4316 command_runner.go:130] > May 14 00:15:30 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0514 00:18:03.138075    4316 command_runner.go:130] > May 14 00:15:30 minikube cri-dockerd[223]: time="2024-05-14T00:15:30Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0514 00:18:03.138075    4316 command_runner.go:130] > May 14 00:15:30 minikube cri-dockerd[223]: time="2024-05-14T00:15:30Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0514 00:18:03.138118    4316 command_runner.go:130] > May 14 00:15:30 minikube cri-dockerd[223]: time="2024-05-14T00:15:30Z" level=info msg="Start docker client with request timeout 0s"
	I0514 00:18:03.138118    4316 command_runner.go:130] > May 14 00:15:30 minikube cri-dockerd[223]: time="2024-05-14T00:15:30Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0514 00:18:03.138118    4316 command_runner.go:130] > May 14 00:15:31 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:03.138118    4316 command_runner.go:130] > May 14 00:15:31 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0514 00:18:03.138224    4316 command_runner.go:130] > May 14 00:15:31 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0514 00:18:03.138262    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0514 00:18:03.138262    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0514 00:18:03.138318    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0514 00:18:03.138318    4316 command_runner.go:130] > May 14 00:15:33 minikube cri-dockerd[418]: time="2024-05-14T00:15:33Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0514 00:18:03.138387    4316 command_runner.go:130] > May 14 00:15:33 minikube cri-dockerd[418]: time="2024-05-14T00:15:33Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:33 minikube cri-dockerd[418]: time="2024-05-14T00:15:33Z" level=info msg="Start docker client with request timeout 0s"
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:33 minikube cri-dockerd[418]: time="2024-05-14T00:15:33Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:36 minikube cri-dockerd[426]: time="2024-05-14T00:15:36Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:36 minikube cri-dockerd[426]: time="2024-05-14T00:15:36Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:36 minikube cri-dockerd[426]: time="2024-05-14T00:15:36Z" level=info msg="Start docker client with request timeout 0s"
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:36 minikube cri-dockerd[426]: time="2024-05-14T00:15:36Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 systemd[1]: Starting Docker Application Container Engine...
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[654]: time="2024-05-14T00:16:17.349024460Z" level=info msg="Starting up"
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[654]: time="2024-05-14T00:16:17.349886331Z" level=info msg="containerd not running, starting managed containerd"
	I0514 00:18:03.138940    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[654]: time="2024-05-14T00:16:17.351031392Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=660
	I0514 00:18:03.138980    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.380428255Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0514 00:18:03.139038    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.407060046Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0514 00:18:03.139038    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.407104860Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0514 00:18:03.139076    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.407157277Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0514 00:18:03.139162    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.407182685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.139208    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408093872Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:03.139246    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408200005Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.139290    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408421875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:03.139327    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408522107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.139373    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408552116Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0514 00:18:03.139411    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408565820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.139455    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.409126597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.139493    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.409855027Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.139574    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.412841968Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:03.139617    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.412982412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.139654    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.413109352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:03.139701    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.413195779Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0514 00:18:03.139738    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.414192994Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0514 00:18:03.139782    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.414303628Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0514 00:18:03.139819    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.414321234Z" level=info msg="metadata content store policy set" policy=shared
	I0514 00:18:03.139864    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420644226Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0514 00:18:03.139902    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420793973Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0514 00:18:03.139902    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420815380Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0514 00:18:03.139947    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420835086Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0514 00:18:03.139979    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420849391Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0514 00:18:03.140017    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421006640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0514 00:18:03.140048    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421303834Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0514 00:18:03.140048    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421395163Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0514 00:18:03.140086    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421479890Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0514 00:18:03.140120    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421494994Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0514 00:18:03.140190    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421507198Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.140227    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421523703Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.140273    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421540509Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.140313    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421554613Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.140359    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421571518Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.140359    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421584022Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.140396    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421594526Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.140479    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421604629Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.140527    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421626336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140565    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421639040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140609    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421651344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140609    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421662947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140646    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421673350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140729    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421684554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140729    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421695257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140813    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421705961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140813    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421717564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140867    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421730268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140906    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421774782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140906    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421787286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140944    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421797990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140944    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421811094Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0514 00:18:03.140983    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421828299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.141022    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421838703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.141022    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421849206Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0514 00:18:03.141060    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421898721Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0514 00:18:03.141093    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421926330Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0514 00:18:03.141132    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421987549Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0514 00:18:03.141171    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422004755Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0514 00:18:03.141208    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422070276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.141208    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422106987Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0514 00:18:03.141247    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422118891Z" level=info msg="NRI interface is disabled by configuration."
	I0514 00:18:03.141247    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422453196Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0514 00:18:03.141284    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422571233Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0514 00:18:03.141284    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422619148Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0514 00:18:03.141318    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422687970Z" level=info msg="containerd successfully booted in 0.044863s"
	I0514 00:18:03.141318    4316 command_runner.go:130] > May 14 00:16:18 multinode-101100 dockerd[654]: time="2024-05-14T00:16:18.404653025Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0514 00:18:03.141354    4316 command_runner.go:130] > May 14 00:16:18 multinode-101100 dockerd[654]: time="2024-05-14T00:16:18.578701970Z" level=info msg="Loading containers: start."
	I0514 00:18:03.141387    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.027152626Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0514 00:18:03.141387    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.105905244Z" level=info msg="Loading containers: done."
	I0514 00:18:03.141424    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.135340666Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0514 00:18:03.141457    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.136139953Z" level=info msg="Daemon has completed initialization"
	I0514 00:18:03.141457    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.185948604Z" level=info msg="API listen on [::]:2376"
	I0514 00:18:03.141494    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.186071317Z" level=info msg="API listen on /var/run/docker.sock"
	I0514 00:18:03.141494    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 systemd[1]: Started Docker Application Container Engine.
	I0514 00:18:03.141527    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 systemd[1]: Stopping Docker Application Container Engine...
	I0514 00:18:03.141527    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.988898314Z" level=info msg="Processing signal 'terminated'"
	I0514 00:18:03.141564    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.989838579Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0514 00:18:03.141564    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.990583130Z" level=info msg="Daemon shutdown complete"
	I0514 00:18:03.141602    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.990661536Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0514 00:18:03.141640    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.990696238Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0514 00:18:03.141640    4316 command_runner.go:130] > May 14 00:16:42 multinode-101100 systemd[1]: docker.service: Deactivated successfully.
	I0514 00:18:03.141678    4316 command_runner.go:130] > May 14 00:16:42 multinode-101100 systemd[1]: Stopped Docker Application Container Engine.
	I0514 00:18:03.141678    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 systemd[1]: Starting Docker Application Container Engine...
	I0514 00:18:03.141678    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:43.059729298Z" level=info msg="Starting up"
	I0514 00:18:03.141716    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:43.060541955Z" level=info msg="containerd not running, starting managed containerd"
	I0514 00:18:03.141749    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:43.061850245Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1055
	I0514 00:18:03.141749    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.092613476Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0514 00:18:03.141786    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115368453Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0514 00:18:03.141818    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115403155Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0514 00:18:03.141818    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115435257Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0514 00:18:03.141855    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115450359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.141887    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115473760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:03.141924    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115486261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.141924    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115635771Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:03.141962    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115738478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.141999    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115756280Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0514 00:18:03.141999    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115766280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.142038    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115789882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.142074    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.116031099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.142107    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.119790059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:03.142144    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.119888566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.142144    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120181886Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:03.142176    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120287794Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0514 00:18:03.142213    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120385900Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0514 00:18:03.142246    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120406702Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0514 00:18:03.142282    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120419603Z" level=info msg="metadata content store policy set" policy=shared
	I0514 00:18:03.142317    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120713023Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0514 00:18:03.142354    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120746825Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0514 00:18:03.142354    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120760126Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0514 00:18:03.142386    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120773227Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0514 00:18:03.142423    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120785328Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0514 00:18:03.142423    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120826831Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0514 00:18:03.142456    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120999543Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0514 00:18:03.142493    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121054147Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0514 00:18:03.142493    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121092049Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0514 00:18:03.142531    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121102050Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0514 00:18:03.142568    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121115951Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.142568    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121126152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.142602    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121135052Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.142631    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121145153Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.142656    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121156354Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.142707    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121165854Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.142731    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121175255Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.142780    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121184656Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.142780    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121204657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142815    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121216358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142815    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121225759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142862    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121235159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121243960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121254361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121263161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121275762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121287763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121299564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121364668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121378369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121388070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121400871Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121421772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121432873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121442174Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121474076Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121485477Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121493977Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121504178Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121548581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121558382Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121570783Z" level=info msg="NRI interface is disabled by configuration."
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121732894Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121765696Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121795498Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121808099Z" level=info msg="containerd successfully booted in 0.031442s"
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.110784113Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.142577516Z" level=info msg="Loading containers: start."
	I0514 00:18:03.143418    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.405628939Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0514 00:18:03.143418    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.480865351Z" level=info msg="Loading containers: done."
	I0514 00:18:03.143458    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.503621028Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0514 00:18:03.143493    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.503703734Z" level=info msg="Daemon has completed initialization"
	I0514 00:18:03.143493    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.545253312Z" level=info msg="API listen on /var/run/docker.sock"
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.545312016Z" level=info msg="API listen on [::]:2376"
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 systemd[1]: Started Docker Application Container Engine.
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Start docker client with request timeout 0s"
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Loaded network plugin cni"
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Start cri-dockerd grpc backend"
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-xqj6w_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"76d1b8ce19aba5b210540936b7a4b3d885cf4632a985872e3cf05d6cea2e0ca2\""
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-4kmx4_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"8bb49b28c842af421711ef939d018058baa07a32bbcdc98976511d4800986697\""
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.717439407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.717535614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.717551915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.718214261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.720663031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.720923549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.721017455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.721295774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.783128658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.783344773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.783450280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144047    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.783657895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144085    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.816093342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.144085    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.816151946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.144120    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.816166547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.816251853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ddcaadef980aca40a7740fe7c59949c3cb803d9fb441eca155b02162f3422bb8/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/659643d47b9ae231a8b97d9871cab6dfac5f6d06e647c919d14170832ee47683/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/419648c0d4053fc49953367496f1dbfe0fc7ce631e09569d18f5031a7c94053b/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/509b8407e0955daa05e6418b83790728e61d0bd72fecdd814c8e92ae9e80d3a3/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.258935521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.259980593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.260187008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.260361520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.272553064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.272771779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.272798781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.272907589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.314782590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.314905098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.314946601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.315263523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.385829312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.386016625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.386135333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.386495758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:55Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.444453862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.144676    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.444531867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.444549969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.444647976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.461909471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.462106685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.462142187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.462265196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.492511091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.492965923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.493135035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.493390352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a8ac60a565998ca52581e38272f2fcdb5f7038023f93d728cd74f5b89f5593ed/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/468a0e2976ae45a571a99afabfcd1329c76873e973179fe56cc9ef46e2533698/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.849392115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.849539826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.849623331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.849861048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.857219658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.857468675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.857687390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.858016113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5233e076edceb93931d756579982e556959dfd31508760da215a8407dca14e56/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:57.218178264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:57.218325574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:57.218348976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:57.218459383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145229    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 dockerd[1049]: time="2024-05-14T00:17:17.430189771Z" level=info msg="ignoring event" container=b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0514 00:18:03.145229    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:17.431460316Z" level=info msg="shim disconnected" id=b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7 namespace=moby
	I0514 00:18:03.145229    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:17.431869631Z" level=warning msg="cleaning up after shim disconnected" id=b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7 namespace=moby
	I0514 00:18:03.145229    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:17.432007736Z" level=info msg="cleaning up dead shim" namespace=moby
	I0514 00:18:03.145407    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 dockerd[1049]: time="2024-05-14T00:17:27.281698284Z" level=info msg="ignoring event" container=b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0514 00:18:03.145455    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:27.282877145Z" level=info msg="shim disconnected" id=b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc namespace=moby
	I0514 00:18:03.145455    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:27.283000451Z" level=warning msg="cleaning up after shim disconnected" id=b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc namespace=moby
	I0514 00:18:03.145488    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:27.283015352Z" level=info msg="cleaning up dead shim" namespace=moby
	I0514 00:18:03.145519    4316 command_runner.go:130] > May 14 00:17:28 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:28.098999177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.145590    4316 command_runner.go:130] > May 14 00:17:28 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:28.099271791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.145590    4316 command_runner.go:130] > May 14 00:17:28 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:28.099326694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145632    4316 command_runner.go:130] > May 14 00:17:28 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:28.099641511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145662    4316 command_runner.go:130] > May 14 00:17:40 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:40.092603581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.145704    4316 command_runner.go:130] > May 14 00:17:40 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:40.093732951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.145704    4316 command_runner.go:130] > May 14 00:17:40 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:40.093768053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145734    4316 command_runner.go:130] > May 14 00:17:40 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:40.095427255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145807    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235051362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.145807    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235156269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.145848    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235169170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145879    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235258576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145920    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235645702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.145920    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235713507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.145951    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235730808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235828014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:18:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1cccb5e8cee3b173bd49a88aee4239ccc8bc11a3a166316e92f3a9abce9b252d/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:18:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8cb9b6d6d0915742a78c054211d49332a04beb4875f8a8f80cc4131b2a11aa2d/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.743900500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.743970305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.744406335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.745139484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.808545660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.808756974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.808962988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.809189903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:03.174078    4316 logs.go:123] Gathering logs for kubelet ...
	I0514 00:18:03.174078    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0514 00:18:03.194098    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0514 00:18:03.194866    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 kubelet[1385]: I0514 00:16:46.507609    1385 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0514 00:18:03.194866    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 kubelet[1385]: I0514 00:16:46.507660    1385 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:03.194866    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 kubelet[1385]: I0514 00:16:46.508230    1385 server.go:927] "Client rotation is on, will bootstrap in background"
	I0514 00:18:03.194982    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 kubelet[1385]: E0514 00:16:46.508906    1385 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0514 00:18:03.194982    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 kubelet[1441]: I0514 00:16:47.229791    1441 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 kubelet[1441]: I0514 00:16:47.229941    1441 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 kubelet[1441]: I0514 00:16:47.230764    1441 server.go:927] "Client rotation is on, will bootstrap in background"
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 kubelet[1441]: E0514 00:16:47.231303    1441 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.717000    1520 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.717452    1520 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.717850    1520 server.go:927] "Client rotation is on, will bootstrap in background"
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.719747    1520 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.734764    1520 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.754342    1520 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.754443    1520 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0514 00:18:03.195578    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.755707    1520 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0514 00:18:03.195680    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.755788    1520 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-101100","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0514 00:18:03.195860    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.756671    1520 topology_manager.go:138] "Creating topology manager with none policy"
	I0514 00:18:03.195927    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.756747    1520 container_manager_linux.go:301] "Creating device plugin manager"
	I0514 00:18:03.195964    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.757344    1520 state_mem.go:36] "Initialized new in-memory state store"
	I0514 00:18:03.195964    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.758885    1520 kubelet.go:400] "Attempting to sync node with API server"
	I0514 00:18:03.196026    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.759591    1520 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0514 00:18:03.196063    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.759727    1520 kubelet.go:312] "Adding apiserver pod source"
	I0514 00:18:03.196063    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.760630    1520 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0514 00:18:03.196164    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.765370    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-101100&limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.196224    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.765512    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-101100&limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.196322    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.767039    1520 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0514 00:18:03.196358    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.771297    1520 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0514 00:18:03.196419    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.771834    1520 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0514 00:18:03.196460    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.773545    1520 server.go:1264] "Started kubelet"
	I0514 00:18:03.196460    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.773829    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.196558    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.774013    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.196757    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.780360    1520 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.23.102.122:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-101100.17cf32c62bf0274b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-101100,UID:multinode-101100,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-101100,},FirstTimestamp:2024-05-14 00:16:49.773520715 +0000 UTC m=+0.124549330,LastTimestamp:2024-05-14 00:16:49.773520715 +0000 UTC m=+0.124549330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-101100,}"
	I0514 00:18:03.196844    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.781297    1520 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0514 00:18:03.196844    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.786484    1520 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0514 00:18:03.196844    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.787784    1520 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0514 00:18:03.196940    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.792005    1520 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0514 00:18:03.196940    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.800317    1520 server.go:455] "Adding debug handlers to kubelet server"
	I0514 00:18:03.196940    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.805202    1520 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0514 00:18:03.197042    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.805290    1520 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0514 00:18:03.197042    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.812186    1520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-101100?timeout=10s\": dial tcp 172.23.102.122:8443: connect: connection refused" interval="200ms"
	I0514 00:18:03.197176    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.812333    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.197239    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.812369    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.197281    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.816781    1520 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0514 00:18:03.197281    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.816881    1520 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0514 00:18:03.197374    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.816892    1520 factory.go:221] Registration of the systemd container factory successfully
	I0514 00:18:03.197374    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.849206    1520 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0514 00:18:03.197374    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.849426    1520 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0514 00:18:03.197479    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.849585    1520 state_mem.go:36] "Initialized new in-memory state store"
	I0514 00:18:03.197479    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.850764    1520 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0514 00:18:03.197689    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.850799    1520 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0514 00:18:03.197689    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.850826    1520 policy_none.go:49] "None policy: Start"
	I0514 00:18:03.197794    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.855604    1520 reconciler.go:26] "Reconciler: start to sync state"
	I0514 00:18:03.197794    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.884024    1520 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0514 00:18:03.197794    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.884165    1520 state_mem.go:35] "Initializing new in-memory state store"
	I0514 00:18:03.197888    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.886215    1520 state_mem.go:75] "Updated machine memory state"
	I0514 00:18:03.197888    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.888657    1520 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0514 00:18:03.197982    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.888839    1520 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0514 00:18:03.197982    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.891306    1520 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0514 00:18:03.198075    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.897961    1520 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0514 00:18:03.198075    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.898040    1520 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0514 00:18:03.198075    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.898088    1520 kubelet.go:2337] "Starting kubelet main sync loop"
	I0514 00:18:03.198168    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.898127    1520 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0514 00:18:03.198168    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.898551    1520 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0514 00:18:03.198261    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.899218    1520 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-101100\" not found"
	I0514 00:18:03.198261    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.900215    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.198357    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.900324    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.198504    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.907443    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:03.198583    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.909152    1520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.102.122:8443: connect: connection refused" node="multinode-101100"
	I0514 00:18:03.198639    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.912132    1520 iptables.go:577] "Could not set up iptables canary" err=<
	I0514 00:18:03.198678    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0514 00:18:03.198711    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0514 00:18:03.198711    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0514 00:18:03.198711    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0514 00:18:03.199339    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.999139    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f7c140951f4f8270da243f55135e9f108f3cdf5ef11a4e990e06822ace5adbd"
	I0514 00:18:03.199432    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.999762    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90d7537422a83c9a57ab3bed978e87441e2725a75ebc91f5cad3319d11d4ea18"
	I0514 00:18:03.199432    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.999846    1520 topology_manager.go:215] "Topology Admit Handler" podUID="378d61cf78af695f1df41e321907a84d" podNamespace="kube-system" podName="kube-apiserver-multinode-101100"
	I0514 00:18:03.199432    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.000880    1520 topology_manager.go:215] "Topology Admit Handler" podUID="5393de2704b2efef461d22fa52aa93c8" podNamespace="kube-system" podName="kube-controller-manager-multinode-101100"
	I0514 00:18:03.199432    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.002201    1520 topology_manager.go:215] "Topology Admit Handler" podUID="8083abd658221f47cabf81a00c4ca98e" podNamespace="kube-system" podName="kube-scheduler-multinode-101100"
	I0514 00:18:03.199432    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.004707    1520 topology_manager.go:215] "Topology Admit Handler" podUID="62d8afc7714e8ab65bff9675d120bb67" podNamespace="kube-system" podName="etcd-multinode-101100"
	I0514 00:18:03.199694    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.007687    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcb3b27edcd2a44b67fad4a74f438a62eec78b20422f6f952396053574dfb97e"
	I0514 00:18:03.199694    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.007796    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da9268fd6556bae4d0109c5065588160bcf737c35e1e5df738d31786425c22ff"
	I0514 00:18:03.199781    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.007891    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bd694480978f356b61313108a6ff716a8d5f6e854fea1e4aa89a76a68d049f0"
	I0514 00:18:03.199781    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.007938    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="287e744a4dc2e511f4e40696c7d3b4193896c0c40a5bb527e569d1d3ec2cb908"
	I0514 00:18:03.199781    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.013966    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad0550a5dabf16106fc2956251a65bccdc32f3f3be1f27246f675964fd548a1f"
	I0514 00:18:03.200083    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.014759    1520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-101100?timeout=10s\": dial tcp 172.23.102.122:8443: connect: connection refused" interval="400ms"
	I0514 00:18:03.200083    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.031437    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76d1b8ce19aba5b210540936b7a4b3d885cf4632a985872e3cf05d6cea2e0ca2"
	I0514 00:18:03.200083    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.048649    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bb49b28c842af421711ef939d018058baa07a32bbcdc98976511d4800986697"
	I0514 00:18:03.200083    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074775    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/378d61cf78af695f1df41e321907a84d-ca-certs\") pod \"kube-apiserver-multinode-101100\" (UID: \"378d61cf78af695f1df41e321907a84d\") " pod="kube-system/kube-apiserver-multinode-101100"
	I0514 00:18:03.200083    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074859    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/378d61cf78af695f1df41e321907a84d-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-101100\" (UID: \"378d61cf78af695f1df41e321907a84d\") " pod="kube-system/kube-apiserver-multinode-101100"
	I0514 00:18:03.200083    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074906    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-k8s-certs\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:03.200083    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074943    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:03.200083    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074981    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/62d8afc7714e8ab65bff9675d120bb67-etcd-certs\") pod \"etcd-multinode-101100\" (UID: \"62d8afc7714e8ab65bff9675d120bb67\") " pod="kube-system/etcd-multinode-101100"
	I0514 00:18:03.200614    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075015    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/62d8afc7714e8ab65bff9675d120bb67-etcd-data\") pod \"etcd-multinode-101100\" (UID: \"62d8afc7714e8ab65bff9675d120bb67\") " pod="kube-system/etcd-multinode-101100"
	I0514 00:18:03.200733    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075045    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/378d61cf78af695f1df41e321907a84d-k8s-certs\") pod \"kube-apiserver-multinode-101100\" (UID: \"378d61cf78af695f1df41e321907a84d\") " pod="kube-system/kube-apiserver-multinode-101100"
	I0514 00:18:03.200779    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075248    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-ca-certs\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:03.200907    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075285    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-flexvolume-dir\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:03.201054    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075316    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-kubeconfig\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:03.201117    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075345    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8083abd658221f47cabf81a00c4ca98e-kubeconfig\") pod \"kube-scheduler-multinode-101100\" (UID: \"8083abd658221f47cabf81a00c4ca98e\") " pod="kube-system/kube-scheduler-multinode-101100"
	I0514 00:18:03.201218    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.111262    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:03.201255    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.112979    1520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.102.122:8443: connect: connection refused" node="multinode-101100"
	I0514 00:18:03.201297    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.416229    1520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-101100?timeout=10s\": dial tcp 172.23.102.122:8443: connect: connection refused" interval="800ms"
	I0514 00:18:03.201297    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.515338    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:03.201340    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.516940    1520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.102.122:8443: connect: connection refused" node="multinode-101100"
	I0514 00:18:03.201423    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: W0514 00:16:50.730920    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.201464    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.730993    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.201507    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: W0514 00:16:51.074200    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.201549    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.074270    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.201549    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: I0514 00:16:51.076835    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="419648c0d4053fc49953367496f1dbfe0fc7ce631e09569d18f5031a7c94053b"
	I0514 00:18:03.201592    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: W0514 00:16:51.081775    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-101100&limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.201654    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.081938    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-101100&limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.201716    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: I0514 00:16:51.108133    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="509b8407e0955daa05e6418b83790728e61d0bd72fecdd814c8e92ae9e80d3a3"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.218458    1520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-101100?timeout=10s\": dial tcp 172.23.102.122:8443: connect: connection refused" interval="1.6s"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: I0514 00:16:51.318715    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.319804    1520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.102.122:8443: connect: connection refused" node="multinode-101100"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: W0514 00:16:51.367337    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.367409    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:52 multinode-101100 kubelet[1520]: I0514 00:16:52.921237    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.086028    1520 kubelet_node_status.go:112] "Node was previously registered" node="multinode-101100"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.086698    1520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-multinode-101100\" already exists" pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.086743    1520 kubelet_node_status.go:76] "Successfully registered node" node="multinode-101100"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.088971    1520 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.090614    1520 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.091996    1520 setters.go:580] "Node became not ready" node="multinode-101100" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-05-14T00:16:55Z","lastTransitionTime":"2024-05-14T00:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.783435    1520 apiserver.go:52] "Watching apiserver"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.788503    1520 topology_manager.go:215] "Topology Admit Handler" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4kmx4"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.788795    1520 topology_manager.go:215] "Topology Admit Handler" podUID="5b3ee167-f21f-46b3-bace-03a7233717e0" podNamespace="kube-system" podName="kindnet-9q2tv"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.788932    1520 topology_manager.go:215] "Topology Admit Handler" podUID="a9a488af-41ba-47f3-87b0-5a2f062afad6" podNamespace="kube-system" podName="kube-proxy-zhcz6"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.789028    1520 topology_manager.go:215] "Topology Admit Handler" podUID="a92f04b8-a93f-42d8-81d7-d4da6bf2e247" podNamespace="kube-system" podName="storage-provisioner"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.789184    1520 topology_manager.go:215] "Topology Admit Handler" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae" podNamespace="default" podName="busybox-fc5497c4f-xqj6w"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.789553    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.789850    1520 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-101100" podUID="1d9c79a4-1e4a-46fb-b3e8-02a4775f40af"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.790329    1520 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-101100" podUID="cd31d030-75f8-4abb-bcad-34031cec7aa6"
	I0514 00:18:03.202264    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.794088    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.202304    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.798934    1520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-multinode-101100\" already exists" pod="kube-system/kube-scheduler-multinode-101100"
	I0514 00:18:03.202304    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.809466    1520 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0514 00:18:03.202349    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.835196    1520 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-101100"
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.857783    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5b3ee167-f21f-46b3-bace-03a7233717e0-cni-cfg\") pod \"kindnet-9q2tv\" (UID: \"5b3ee167-f21f-46b3-bace-03a7233717e0\") " pod="kube-system/kindnet-9q2tv"
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.857845    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b3ee167-f21f-46b3-bace-03a7233717e0-xtables-lock\") pod \"kindnet-9q2tv\" (UID: \"5b3ee167-f21f-46b3-bace-03a7233717e0\") " pod="kube-system/kindnet-9q2tv"
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.857866    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9a488af-41ba-47f3-87b0-5a2f062afad6-xtables-lock\") pod \"kube-proxy-zhcz6\" (UID: \"a9a488af-41ba-47f3-87b0-5a2f062afad6\") " pod="kube-system/kube-proxy-zhcz6"
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.857954    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b3ee167-f21f-46b3-bace-03a7233717e0-lib-modules\") pod \"kindnet-9q2tv\" (UID: \"5b3ee167-f21f-46b3-bace-03a7233717e0\") " pod="kube-system/kindnet-9q2tv"
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.858020    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a92f04b8-a93f-42d8-81d7-d4da6bf2e247-tmp\") pod \"storage-provisioner\" (UID: \"a92f04b8-a93f-42d8-81d7-d4da6bf2e247\") " pod="kube-system/storage-provisioner"
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.858051    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9a488af-41ba-47f3-87b0-5a2f062afad6-lib-modules\") pod \"kube-proxy-zhcz6\" (UID: \"a9a488af-41ba-47f3-87b0-5a2f062afad6\") " pod="kube-system/kube-proxy-zhcz6"
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.859176    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.859325    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:16:56.359260421 +0000 UTC m=+6.710289036 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.873841    1520 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-101100"
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.907826    1520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03d9b35578220c9e99f77722d9aa294f" path="/var/lib/kubelet/pods/03d9b35578220c9e99f77722d9aa294f/volumes"
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.910490    1520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1af4b764a5249ff25d3c1c709387c273" path="/var/lib/kubelet/pods/1af4b764a5249ff25d3c1c709387c273/volumes"
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.917375    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.917415    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.917466    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:16:56.417450852 +0000 UTC m=+6.768479567 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.964380    1520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-101100" podStartSLOduration=0.9643304 podStartE2EDuration="964.3304ms" podCreationTimestamp="2024-05-14 00:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-14 00:16:55.964174289 +0000 UTC m=+6.315203004" watchObservedRunningTime="2024-05-14 00:16:55.9643304 +0000 UTC m=+6.315359015"
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.985118    1520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-101100" podStartSLOduration=0.985100539 podStartE2EDuration="985.100539ms" podCreationTimestamp="2024-05-14 00:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-14 00:16:55.984806519 +0000 UTC m=+6.335835134" watchObservedRunningTime="2024-05-14 00:16:55.985100539 +0000 UTC m=+6.336129154"
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.362973    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.363041    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:16:57.363025821 +0000 UTC m=+7.714054436 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:03.202904    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.463836    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.202942    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.463868    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.202990    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.463923    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:16:57.46390701 +0000 UTC m=+7.814935725 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.377986    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.378101    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:16:59.378049439 +0000 UTC m=+9.729078054 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.478290    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.478356    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.478448    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:16:59.478431994 +0000 UTC m=+9.829460709 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.899119    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.899678    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.394980    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.395173    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:17:03.39515828 +0000 UTC m=+13.746186895 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.496260    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.496313    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.496438    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:17:03.496350091 +0000 UTC m=+13.847378806 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.891391    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.901591    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.914896    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:17:01 multinode-101100 kubelet[1520]: E0514 00:17:01.898894    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.203551    4316 command_runner.go:130] > May 14 00:17:01 multinode-101100 kubelet[1520]: E0514 00:17:01.899345    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.203551    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.445887    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.445965    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:17:11.44595071 +0000 UTC m=+21.796979425 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.547258    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.547292    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.547346    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:17:11.547331033 +0000 UTC m=+21.898359648 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.899515    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.900290    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:04 multinode-101100 kubelet[1520]: E0514 00:17:04.893282    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:05 multinode-101100 kubelet[1520]: E0514 00:17:05.900260    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:05 multinode-101100 kubelet[1520]: E0514 00:17:05.900651    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:07 multinode-101100 kubelet[1520]: E0514 00:17:07.899212    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:07 multinode-101100 kubelet[1520]: E0514 00:17:07.899658    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:09 multinode-101100 kubelet[1520]: E0514 00:17:09.895008    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:09 multinode-101100 kubelet[1520]: E0514 00:17:09.899381    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:09 multinode-101100 kubelet[1520]: E0514 00:17:09.899884    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.508629    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.508833    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:17:27.508813455 +0000 UTC m=+37.859842170 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.609334    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.609455    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.609579    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:17:27.609562102 +0000 UTC m=+37.960590817 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.899431    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.899749    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:13 multinode-101100 kubelet[1520]: E0514 00:17:13.898578    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:13 multinode-101100 kubelet[1520]: E0514 00:17:13.899676    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:14 multinode-101100 kubelet[1520]: E0514 00:17:14.897029    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:15 multinode-101100 kubelet[1520]: E0514 00:17:15.899665    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:15 multinode-101100 kubelet[1520]: E0514 00:17:15.900476    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: I0514 00:17:17.766386    1520 scope.go:117] "RemoveContainer" containerID="9c4eb727cedb65853cc3a94fdcc3e267ed41cd9cb15ef1cc1bb84f6f2278c9c4"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: I0514 00:17:17.767364    1520 scope.go:117] "RemoveContainer" containerID="b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: E0514 00:17:17.767901    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kindnet-cni pod=kindnet-9q2tv_kube-system(5b3ee167-f21f-46b3-bace-03a7233717e0)\"" pod="kube-system/kindnet-9q2tv" podUID="5b3ee167-f21f-46b3-bace-03a7233717e0"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: E0514 00:17:17.898891    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: E0514 00:17:17.899300    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:19 multinode-101100 kubelet[1520]: E0514 00:17:19.898102    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:19 multinode-101100 kubelet[1520]: E0514 00:17:19.899045    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.204698    4316 command_runner.go:130] > May 14 00:17:19 multinode-101100 kubelet[1520]: E0514 00:17:19.899315    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.204740    4316 command_runner.go:130] > May 14 00:17:21 multinode-101100 kubelet[1520]: E0514 00:17:21.900488    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.204740    4316 command_runner.go:130] > May 14 00:17:21 multinode-101100 kubelet[1520]: E0514 00:17:21.900677    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.204798    4316 command_runner.go:130] > May 14 00:17:23 multinode-101100 kubelet[1520]: E0514 00:17:23.899091    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.204798    4316 command_runner.go:130] > May 14 00:17:23 multinode-101100 kubelet[1520]: E0514 00:17:23.899625    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.204846    4316 command_runner.go:130] > May 14 00:17:24 multinode-101100 kubelet[1520]: E0514 00:17:24.899382    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:03.204893    4316 command_runner.go:130] > May 14 00:17:25 multinode-101100 kubelet[1520]: E0514 00:17:25.900463    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.204924    4316 command_runner.go:130] > May 14 00:17:25 multinode-101100 kubelet[1520]: E0514 00:17:25.900948    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.204962    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.550622    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:03.205060    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.550839    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:17:59.550821042 +0000 UTC m=+69.901849657 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:03.205099    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.651942    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.205128    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.651988    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.205195    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.652038    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:17:59.652024653 +0000 UTC m=+70.003053368 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.205233    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.900302    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.205263    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.901190    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.205301    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: I0514 00:17:27.901408    1520 scope.go:117] "RemoveContainer" containerID="b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7"
	I0514 00:18:03.205330    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: I0514 00:17:27.913749    1520 scope.go:117] "RemoveContainer" containerID="e6ee22ee5c1b88cb0b1190c646094aefe229bfbd4486f007cde2b36da39ca886"
	I0514 00:18:03.205330    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: I0514 00:17:27.914050    1520 scope.go:117] "RemoveContainer" containerID="b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc"
	I0514 00:18:03.205369    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.914651    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a92f04b8-a93f-42d8-81d7-d4da6bf2e247)\"" pod="kube-system/storage-provisioner" podUID="a92f04b8-a93f-42d8-81d7-d4da6bf2e247"
	I0514 00:18:03.205398    4316 command_runner.go:130] > May 14 00:17:29 multinode-101100 kubelet[1520]: E0514 00:17:29.898652    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.205472    4316 command_runner.go:130] > May 14 00:17:29 multinode-101100 kubelet[1520]: E0514 00:17:29.899154    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.205472    4316 command_runner.go:130] > May 14 00:17:29 multinode-101100 kubelet[1520]: E0514 00:17:29.900744    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:17:31 multinode-101100 kubelet[1520]: E0514 00:17:31.900407    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:17:31 multinode-101100 kubelet[1520]: E0514 00:17:31.902295    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:17:33 multinode-101100 kubelet[1520]: E0514 00:17:33.898560    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:17:33 multinode-101100 kubelet[1520]: E0514 00:17:33.899627    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:17:39 multinode-101100 kubelet[1520]: I0514 00:17:39.899892    1520 scope.go:117] "RemoveContainer" containerID="b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc"
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]: I0514 00:17:49.888753    1520 scope.go:117] "RemoveContainer" containerID="eda79d47d28ffbc726bec7eaad072eeebb31ec439ed9bbe9fd544b9913b8f3ea"
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]: E0514 00:17:49.924547    1520 iptables.go:577] "Could not set up iptables canary" err=<
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]: I0514 00:17:49.932695    1520 scope.go:117] "RemoveContainer" containerID="06f1a683cad8348fc4f8e339f226bbda12c4e8c1025c7acb52e2792253dd3008"
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 kubelet[1520]: I0514 00:18:00.478966    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cccb5e8cee3b173bd49a88aee4239ccc8bc11a3a166316e92f3a9abce9b252d"
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 kubelet[1520]: I0514 00:18:00.543407    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cb9b6d6d0915742a78c054211d49332a04beb4875f8a8f80cc4131b2a11aa2d"
	I0514 00:18:03.242705    4316 logs.go:123] Gathering logs for describe nodes ...
	I0514 00:18:03.242705    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0514 00:18:03.495448    4316 command_runner.go:130] > Name:               multinode-101100
	I0514 00:18:03.495448    4316 command_runner.go:130] > Roles:              control-plane
	I0514 00:18:03.495448    4316 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     kubernetes.io/hostname=multinode-101100
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     kubernetes.io/os=linux
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     minikube.k8s.io/name=multinode-101100
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_13T23_56_10_0700
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0514 00:18:03.495448    4316 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0514 00:18:03.495448    4316 command_runner.go:130] > CreationTimestamp:  Mon, 13 May 2024 23:56:06 +0000
	I0514 00:18:03.495448    4316 command_runner.go:130] > Taints:             <none>
	I0514 00:18:03.495448    4316 command_runner.go:130] > Unschedulable:      false
	I0514 00:18:03.495448    4316 command_runner.go:130] > Lease:
	I0514 00:18:03.495448    4316 command_runner.go:130] >   HolderIdentity:  multinode-101100
	I0514 00:18:03.495448    4316 command_runner.go:130] >   AcquireTime:     <unset>
	I0514 00:18:03.495448    4316 command_runner.go:130] >   RenewTime:       Tue, 14 May 2024 00:17:56 +0000
	I0514 00:18:03.495448    4316 command_runner.go:130] > Conditions:
	I0514 00:18:03.495448    4316 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0514 00:18:03.495448    4316 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0514 00:18:03.495448    4316 command_runner.go:130] >   MemoryPressure   False   Tue, 14 May 2024 00:17:35 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0514 00:18:03.495448    4316 command_runner.go:130] >   DiskPressure     False   Tue, 14 May 2024 00:17:35 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0514 00:18:03.495448    4316 command_runner.go:130] >   PIDPressure      False   Tue, 14 May 2024 00:17:35 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0514 00:18:03.495448    4316 command_runner.go:130] >   Ready            True    Tue, 14 May 2024 00:17:35 +0000   Tue, 14 May 2024 00:17:35 +0000   KubeletReady                 kubelet is posting ready status
	I0514 00:18:03.495448    4316 command_runner.go:130] > Addresses:
	I0514 00:18:03.495448    4316 command_runner.go:130] >   InternalIP:  172.23.102.122
	I0514 00:18:03.495448    4316 command_runner.go:130] >   Hostname:    multinode-101100
	I0514 00:18:03.495448    4316 command_runner.go:130] > Capacity:
	I0514 00:18:03.495448    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:03.495448    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:03.495448    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:03.495448    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:03.495448    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:03.495448    4316 command_runner.go:130] > Allocatable:
	I0514 00:18:03.496459    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:03.496459    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:03.496459    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:03.496459    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:03.496459    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:03.496628    4316 command_runner.go:130] > System Info:
	I0514 00:18:03.496628    4316 command_runner.go:130] >   Machine ID:                 5110a322e7104904905e303a94b950b6
	I0514 00:18:03.496628    4316 command_runner.go:130] >   System UUID:                9b23fe4d-6d34-444b-8185-a84d51d23610
	I0514 00:18:03.496628    4316 command_runner.go:130] >   Boot ID:                    2e73d191-2dbe-4055-a17d-cff8a9e53a15
	I0514 00:18:03.496799    4316 command_runner.go:130] >   Kernel Version:             5.10.207
	I0514 00:18:03.496799    4316 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0514 00:18:03.496799    4316 command_runner.go:130] >   Operating System:           linux
	I0514 00:18:03.496799    4316 command_runner.go:130] >   Architecture:               amd64
	I0514 00:18:03.496799    4316 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0514 00:18:03.496956    4316 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0514 00:18:03.496956    4316 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0514 00:18:03.497068    4316 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0514 00:18:03.497068    4316 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0514 00:18:03.497134    4316 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0514 00:18:03.497134    4316 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0514 00:18:03.497134    4316 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0514 00:18:03.497276    4316 command_runner.go:130] >   default                     busybox-fc5497c4f-xqj6w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	I0514 00:18:03.497389    4316 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-4kmx4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	I0514 00:18:03.497448    4316 command_runner.go:130] >   kube-system                 etcd-multinode-101100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         68s
	I0514 00:18:03.497522    4316 command_runner.go:130] >   kube-system                 kindnet-9q2tv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0514 00:18:03.497584    4316 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-101100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	I0514 00:18:03.497619    4316 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-101100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	I0514 00:18:03.497764    4316 command_runner.go:130] >   kube-system                 kube-proxy-zhcz6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0514 00:18:03.497764    4316 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-101100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	I0514 00:18:03.497764    4316 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0514 00:18:03.497902    4316 command_runner.go:130] > Allocated resources:
	I0514 00:18:03.497902    4316 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0514 00:18:03.497902    4316 command_runner.go:130] >   Resource           Requests     Limits
	I0514 00:18:03.498069    4316 command_runner.go:130] >   --------           --------     ------
	I0514 00:18:03.498069    4316 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0514 00:18:03.498069    4316 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0514 00:18:03.498486    4316 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0514 00:18:03.498519    4316 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0514 00:18:03.498615    4316 command_runner.go:130] > Events:
	I0514 00:18:03.498615    4316 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0514 00:18:03.498787    4316 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0514 00:18:03.498830    4316 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0514 00:18:03.498830    4316 command_runner.go:130] >   Normal  Starting                 65s                kube-proxy       
	I0514 00:18:03.498938    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	I0514 00:18:03.499017    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	I0514 00:18:03.499104    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	I0514 00:18:03.499149    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:03.499266    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m                kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	I0514 00:18:03.499355    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:03.499389    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m                kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	I0514 00:18:03.499516    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m                kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	I0514 00:18:03.499516    4316 command_runner.go:130] >   Normal  Starting                 21m                kubelet          Starting kubelet.
	I0514 00:18:03.499656    4316 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-101100 event: Registered Node multinode-101100 in Controller
	I0514 00:18:03.499691    4316 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-101100 status is now: NodeReady
	I0514 00:18:03.499777    4316 command_runner.go:130] >   Normal  Starting                 74s                kubelet          Starting kubelet.
	I0514 00:18:03.499857    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:03.499955    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  73s (x8 over 74s)  kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	I0514 00:18:03.500042    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    73s (x8 over 74s)  kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	I0514 00:18:03.500042    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     73s (x7 over 74s)  kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	I0514 00:18:03.500174    4316 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-101100 event: Registered Node multinode-101100 in Controller
	I0514 00:18:03.500174    4316 command_runner.go:130] > Name:               multinode-101100-m02
	I0514 00:18:03.500174    4316 command_runner.go:130] > Roles:              <none>
	I0514 00:18:03.500325    4316 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0514 00:18:03.500361    4316 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0514 00:18:03.500458    4316 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0514 00:18:03.500496    4316 command_runner.go:130] >                     kubernetes.io/hostname=multinode-101100-m02
	I0514 00:18:03.500496    4316 command_runner.go:130] >                     kubernetes.io/os=linux
	I0514 00:18:03.500575    4316 command_runner.go:130] >                     minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	I0514 00:18:03.500670    4316 command_runner.go:130] >                     minikube.k8s.io/name=multinode-101100
	I0514 00:18:03.500670    4316 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0514 00:18:03.500767    4316 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_13T23_59_02_0700
	I0514 00:18:03.500808    4316 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0514 00:18:03.500906    4316 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0514 00:18:03.500906    4316 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0514 00:18:03.500948    4316 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0514 00:18:03.501088    4316 command_runner.go:130] > CreationTimestamp:  Mon, 13 May 2024 23:59:02 +0000
	I0514 00:18:03.501088    4316 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0514 00:18:03.501187    4316 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0514 00:18:03.501229    4316 command_runner.go:130] > Unschedulable:      false
	I0514 00:18:03.501297    4316 command_runner.go:130] > Lease:
	I0514 00:18:03.501297    4316 command_runner.go:130] >   HolderIdentity:  multinode-101100-m02
	I0514 00:18:03.501297    4316 command_runner.go:130] >   AcquireTime:     <unset>
	I0514 00:18:03.501437    4316 command_runner.go:130] >   RenewTime:       Tue, 14 May 2024 00:13:52 +0000
	I0514 00:18:03.501437    4316 command_runner.go:130] > Conditions:
	I0514 00:18:03.501535    4316 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0514 00:18:03.501578    4316 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0514 00:18:03.501672    4316 command_runner.go:130] >   MemoryPressure   Unknown   Tue, 14 May 2024 00:10:15 +0000   Tue, 14 May 2024 00:14:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:03.501714    4316 command_runner.go:130] >   DiskPressure     Unknown   Tue, 14 May 2024 00:10:15 +0000   Tue, 14 May 2024 00:14:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:03.501851    4316 command_runner.go:130] >   PIDPressure      Unknown   Tue, 14 May 2024 00:10:15 +0000   Tue, 14 May 2024 00:14:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:03.501943    4316 command_runner.go:130] >   Ready            Unknown   Tue, 14 May 2024 00:10:15 +0000   Tue, 14 May 2024 00:14:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:03.501985    4316 command_runner.go:130] > Addresses:
	I0514 00:18:03.501985    4316 command_runner.go:130] >   InternalIP:  172.23.109.58
	I0514 00:18:03.501985    4316 command_runner.go:130] >   Hostname:    multinode-101100-m02
	I0514 00:18:03.502084    4316 command_runner.go:130] > Capacity:
	I0514 00:18:03.502126    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:03.502126    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:03.502126    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:03.502225    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:03.502225    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:03.502337    4316 command_runner.go:130] > Allocatable:
	I0514 00:18:03.502337    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:03.502337    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:03.502435    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:03.502479    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:03.502577    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:03.502577    4316 command_runner.go:130] > System Info:
	I0514 00:18:03.502619    4316 command_runner.go:130] >   Machine ID:                 8d348bb1bbc048f4b99c681873b42d63
	I0514 00:18:03.502716    4316 command_runner.go:130] >   System UUID:                4330851b-5248-f245-9378-5fc25e670b55
	I0514 00:18:03.502759    4316 command_runner.go:130] >   Boot ID:                    9f102be6-1468-4570-8696-97e5ce51649a
	I0514 00:18:03.502759    4316 command_runner.go:130] >   Kernel Version:             5.10.207
	I0514 00:18:03.502884    4316 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0514 00:18:03.502963    4316 command_runner.go:130] >   Operating System:           linux
	I0514 00:18:03.502963    4316 command_runner.go:130] >   Architecture:               amd64
	I0514 00:18:03.502963    4316 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0514 00:18:03.503071    4316 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0514 00:18:03.503071    4316 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0514 00:18:03.503071    4316 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0514 00:18:03.503165    4316 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0514 00:18:03.503165    4316 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0514 00:18:03.503250    4316 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0514 00:18:03.503343    4316 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0514 00:18:03.503343    4316 command_runner.go:130] >   default                     busybox-fc5497c4f-q7442    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	I0514 00:18:03.503430    4316 command_runner.go:130] >   kube-system                 kindnet-2lwsm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	I0514 00:18:03.503522    4316 command_runner.go:130] >   kube-system                 kube-proxy-b25hq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0514 00:18:03.503522    4316 command_runner.go:130] > Allocated resources:
	I0514 00:18:03.503609    4316 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0514 00:18:03.503609    4316 command_runner.go:130] >   Resource           Requests   Limits
	I0514 00:18:03.503609    4316 command_runner.go:130] >   --------           --------   ------
	I0514 00:18:03.503703    4316 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0514 00:18:03.503801    4316 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0514 00:18:03.503801    4316 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0514 00:18:03.503801    4316 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0514 00:18:03.503801    4316 command_runner.go:130] > Events:
	I0514 00:18:03.503903    4316 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0514 00:18:03.503994    4316 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0514 00:18:03.503994    4316 command_runner.go:130] >   Normal  Starting                 18m                kube-proxy       
	I0514 00:18:03.504087    4316 command_runner.go:130] >   Normal  RegisteredNode           19m                node-controller  Node multinode-101100-m02 event: Registered Node multinode-101100-m02 in Controller
	I0514 00:18:03.504183    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  19m (x2 over 19m)  kubelet          Node multinode-101100-m02 status is now: NodeHasSufficientMemory
	I0514 00:18:03.504183    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    19m (x2 over 19m)  kubelet          Node multinode-101100-m02 status is now: NodeHasNoDiskPressure
	I0514 00:18:03.504276    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     19m (x2 over 19m)  kubelet          Node multinode-101100-m02 status is now: NodeHasSufficientPID
	I0514 00:18:03.504363    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:03.504363    4316 command_runner.go:130] >   Normal  NodeReady                18m                kubelet          Node multinode-101100-m02 status is now: NodeReady
	I0514 00:18:03.504455    4316 command_runner.go:130] >   Normal  NodeNotReady             3m31s              node-controller  Node multinode-101100-m02 status is now: NodeNotReady
	I0514 00:18:03.504539    4316 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-101100-m02 event: Registered Node multinode-101100-m02 in Controller
	I0514 00:18:03.504629    4316 command_runner.go:130] > Name:               multinode-101100-m03
	I0514 00:18:03.504629    4316 command_runner.go:130] > Roles:              <none>
	I0514 00:18:03.504728    4316 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0514 00:18:03.504728    4316 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0514 00:18:03.504804    4316 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0514 00:18:03.504929    4316 command_runner.go:130] >                     kubernetes.io/hostname=multinode-101100-m03
	I0514 00:18:03.505004    4316 command_runner.go:130] >                     kubernetes.io/os=linux
	I0514 00:18:03.505004    4316 command_runner.go:130] >                     minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	I0514 00:18:03.505087    4316 command_runner.go:130] >                     minikube.k8s.io/name=multinode-101100
	I0514 00:18:03.505087    4316 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0514 00:18:03.505087    4316 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_14T00_12_45_0700
	I0514 00:18:03.505087    4316 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0514 00:18:03.505087    4316 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0514 00:18:03.505087    4316 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0514 00:18:03.505087    4316 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0514 00:18:03.505087    4316 command_runner.go:130] > CreationTimestamp:  Tue, 14 May 2024 00:12:44 +0000
	I0514 00:18:03.505087    4316 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0514 00:18:03.505087    4316 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0514 00:18:03.505087    4316 command_runner.go:130] > Unschedulable:      false
	I0514 00:18:03.505087    4316 command_runner.go:130] > Lease:
	I0514 00:18:03.505087    4316 command_runner.go:130] >   HolderIdentity:  multinode-101100-m03
	I0514 00:18:03.505087    4316 command_runner.go:130] >   AcquireTime:     <unset>
	I0514 00:18:03.505087    4316 command_runner.go:130] >   RenewTime:       Tue, 14 May 2024 00:13:36 +0000
	I0514 00:18:03.505087    4316 command_runner.go:130] > Conditions:
	I0514 00:18:03.505087    4316 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0514 00:18:03.505087    4316 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0514 00:18:03.505087    4316 command_runner.go:130] >   MemoryPressure   Unknown   Tue, 14 May 2024 00:12:49 +0000   Tue, 14 May 2024 00:14:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:03.505087    4316 command_runner.go:130] >   DiskPressure     Unknown   Tue, 14 May 2024 00:12:49 +0000   Tue, 14 May 2024 00:14:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:03.505087    4316 command_runner.go:130] >   PIDPressure      Unknown   Tue, 14 May 2024 00:12:49 +0000   Tue, 14 May 2024 00:14:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:03.505087    4316 command_runner.go:130] >   Ready            Unknown   Tue, 14 May 2024 00:12:49 +0000   Tue, 14 May 2024 00:14:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:03.505087    4316 command_runner.go:130] > Addresses:
	I0514 00:18:03.505087    4316 command_runner.go:130] >   InternalIP:  172.23.102.231
	I0514 00:18:03.505087    4316 command_runner.go:130] >   Hostname:    multinode-101100-m03
	I0514 00:18:03.505087    4316 command_runner.go:130] > Capacity:
	I0514 00:18:03.505087    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:03.505087    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:03.505087    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:03.505087    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:03.505087    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:03.505646    4316 command_runner.go:130] > Allocatable:
	I0514 00:18:03.505646    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:03.505827    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:03.505827    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:03.505827    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:03.505827    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:03.505827    4316 command_runner.go:130] > System Info:
	I0514 00:18:03.505827    4316 command_runner.go:130] >   Machine ID:                 11c3fac528de4278b1dafef49e54ea09
	I0514 00:18:03.505827    4316 command_runner.go:130] >   System UUID:                0ee228e5-87a6-0549-9a8d-1747b73431ee
	I0514 00:18:03.505827    4316 command_runner.go:130] >   Boot ID:                    d5c1e04c-3081-4871-912e-a86507b8e24a
	I0514 00:18:03.505827    4316 command_runner.go:130] >   Kernel Version:             5.10.207
	I0514 00:18:03.505827    4316 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0514 00:18:03.505827    4316 command_runner.go:130] >   Operating System:           linux
	I0514 00:18:03.505827    4316 command_runner.go:130] >   Architecture:               amd64
	I0514 00:18:03.505827    4316 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0514 00:18:03.505827    4316 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0514 00:18:03.505827    4316 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0514 00:18:03.505827    4316 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0514 00:18:03.505827    4316 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0514 00:18:03.505827    4316 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0514 00:18:03.505827    4316 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0514 00:18:03.506365    4316 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0514 00:18:03.506400    4316 command_runner.go:130] >   kube-system                 kindnet-tfbt8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	I0514 00:18:03.506400    4316 command_runner.go:130] >   kube-system                 kube-proxy-8zsgn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	I0514 00:18:03.506400    4316 command_runner.go:130] > Allocated resources:
	I0514 00:18:03.506400    4316 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0514 00:18:03.506400    4316 command_runner.go:130] >   Resource           Requests   Limits
	I0514 00:18:03.506400    4316 command_runner.go:130] >   --------           --------   ------
	I0514 00:18:03.506400    4316 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0514 00:18:03.506400    4316 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0514 00:18:03.506400    4316 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0514 00:18:03.506400    4316 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0514 00:18:03.506400    4316 command_runner.go:130] > Events:
	I0514 00:18:03.506400    4316 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0514 00:18:03.506400    4316 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0514 00:18:03.506400    4316 command_runner.go:130] >   Normal  Starting                 5m16s                  kube-proxy       
	I0514 00:18:03.506400    4316 command_runner.go:130] >   Normal  Starting                 14m                    kube-proxy       
	I0514 00:18:03.506400    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:03.506400    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  14m (x2 over 14m)      kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientMemory
	I0514 00:18:03.506400    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    14m (x2 over 14m)      kubelet          Node multinode-101100-m03 status is now: NodeHasNoDiskPressure
	I0514 00:18:03.506926    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     14m (x2 over 14m)      kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientPID
	I0514 00:18:03.506988    4316 command_runner.go:130] >   Normal  NodeReady                14m                    kubelet          Node multinode-101100-m03 status is now: NodeReady
	I0514 00:18:03.506988    4316 command_runner.go:130] >   Normal  Starting                 5m19s                  kubelet          Starting kubelet.
	I0514 00:18:03.506988    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m19s (x2 over 5m19s)  kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientMemory
	I0514 00:18:03.506988    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m19s (x2 over 5m19s)  kubelet          Node multinode-101100-m03 status is now: NodeHasNoDiskPressure
	I0514 00:18:03.506988    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m19s (x2 over 5m19s)  kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientPID
	I0514 00:18:03.506988    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m19s                  kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:03.506988    4316 command_runner.go:130] >   Normal  RegisteredNode           5m16s                  node-controller  Node multinode-101100-m03 event: Registered Node multinode-101100-m03 in Controller
	I0514 00:18:03.506988    4316 command_runner.go:130] >   Normal  NodeReady                5m14s                  kubelet          Node multinode-101100-m03 status is now: NodeReady
	I0514 00:18:03.506988    4316 command_runner.go:130] >   Normal  NodeNotReady             3m46s                  node-controller  Node multinode-101100-m03 status is now: NodeNotReady
	I0514 00:18:03.506988    4316 command_runner.go:130] >   Normal  RegisteredNode           56s                    node-controller  Node multinode-101100-m03 event: Registered Node multinode-101100-m03 in Controller
	I0514 00:18:03.518924    4316 logs.go:123] Gathering logs for etcd [08450c853590] ...
	I0514 00:18:03.518924    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08450c853590"
	I0514 00:18:03.551355    4316 command_runner.go:130] ! {"level":"warn","ts":"2024-05-14T00:16:51.687231Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0514 00:18:03.551834    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.691397Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.23.102.122:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.23.102.122:2380","--initial-cluster=multinode-101100=https://172.23.102.122:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.23.102.122:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.23.102.122:2380","--name=multinode-101100","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0514 00:18:03.551834    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.692425Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0514 00:18:03.551834    4316 command_runner.go:130] ! {"level":"warn","ts":"2024-05-14T00:16:51.693634Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0514 00:18:03.551910    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.693771Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.23.102.122:2380"]}
	I0514 00:18:03.551948    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.694117Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0514 00:18:03.551948    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.703219Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.23.102.122:2379"]}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.704312Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-101100","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.23.102.122:2380"],"listen-peer-urls":["https://172.23.102.122:2380"],"advertise-client-urls":["https://172.23.102.122:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.102.122:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.7264Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"19.905879ms"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.748539Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.766395Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"bb849d1df0b559d7","local-member-id":"6e4c15c3d0f3380f","commit-index":1898}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.767439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f switched to configuration voters=()"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.767611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became follower at term 2"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.768086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 6e4c15c3d0f3380f [peers: [], term: 2, commit: 1898, applied: 0, lastindex: 1898, lastterm: 2]"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"warn","ts":"2024-05-14T00:16:51.782157Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.786938Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1096}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.797876Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1653}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.80426Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.81216Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"6e4c15c3d0f3380f","timeout":"7s"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.813213Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"6e4c15c3d0f3380f"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.814234Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"6e4c15c3d0f3380f","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.815302Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.816695Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.816877Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.816978Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.817493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f switched to configuration voters=(7947751373170489359)"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.817687Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bb849d1df0b559d7","local-member-id":"6e4c15c3d0f3380f","added-peer-id":"6e4c15c3d0f3380f","added-peer-peer-urls":["https://172.23.106.39:2380"]}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.817911Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bb849d1df0b559d7","local-member-id":"6e4c15c3d0f3380f","cluster-version":"3.5"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.818648Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0514 00:18:03.552509    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.83299Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0514 00:18:03.552583    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.834951Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6e4c15c3d0f3380f","initial-advertise-peer-urls":["https://172.23.102.122:2380"],"listen-peer-urls":["https://172.23.102.122:2380"],"advertise-client-urls":["https://172.23.102.122:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.102.122:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0514 00:18:03.552620    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.835138Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0514 00:18:03.552620    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.835469Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.23.102.122:2380"}
	I0514 00:18:03.552661    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.835603Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.23.102.122:2380"}
	I0514 00:18:03.552661    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.468953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f is starting a new election at term 2"}
	I0514 00:18:03.552700    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became pre-candidate at term 2"}
	I0514 00:18:03.552700    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f received MsgPreVoteResp from 6e4c15c3d0f3380f at term 2"}
	I0514 00:18:03.552739    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became candidate at term 3"}
	I0514 00:18:03.552739    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f received MsgVoteResp from 6e4c15c3d0f3380f at term 3"}
	I0514 00:18:03.552778    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became leader at term 3"}
	I0514 00:18:03.552819    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6e4c15c3d0f3380f elected leader 6e4c15c3d0f3380f at term 3"}
	I0514 00:18:03.552819    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.479025Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"6e4c15c3d0f3380f","local-member-attributes":"{Name:multinode-101100 ClientURLs:[https://172.23.102.122:2379]}","request-path":"/0/members/6e4c15c3d0f3380f/attributes","cluster-id":"bb849d1df0b559d7","publish-timeout":"7s"}
	I0514 00:18:03.552857    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.479459Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0514 00:18:03.552898    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.479642Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0514 00:18:03.552898    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.481317Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0514 00:18:03.552936    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.481353Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0514 00:18:03.552936    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.483334Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.23.102.122:2379"}
	I0514 00:18:03.552975    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.483616Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0514 00:18:03.564444    4316 logs.go:123] Gathering logs for dmesg ...
	I0514 00:18:03.564444    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0514 00:18:03.586007    4316 command_runner.go:130] > [May14 00:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0514 00:18:03.586052    4316 command_runner.go:130] > [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0514 00:18:03.586052    4316 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0514 00:18:03.586151    4316 command_runner.go:130] > [  +0.104207] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0514 00:18:03.586151    4316 command_runner.go:130] > [  +0.023601] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0514 00:18:03.586207    4316 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0514 00:18:03.586207    4316 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0514 00:18:03.586282    4316 command_runner.go:130] > [  +0.058832] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0514 00:18:03.586311    4316 command_runner.go:130] > [  +0.024495] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0514 00:18:03.586349    4316 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0514 00:18:03.586405    4316 command_runner.go:130] > [  +5.692465] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0514 00:18:03.586405    4316 command_runner.go:130] > [  +0.707713] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0514 00:18:03.586448    4316 command_runner.go:130] > [  +1.789899] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0514 00:18:03.586489    4316 command_runner.go:130] > [  +7.282690] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0514 00:18:03.586489    4316 command_runner.go:130] > [  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0514 00:18:03.586531    4316 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0514 00:18:03.586571    4316 command_runner.go:130] > [May14 00:16] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	I0514 00:18:03.586571    4316 command_runner.go:130] > [  +0.158382] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	I0514 00:18:03.586614    4316 command_runner.go:130] > [ +23.750429] systemd-fstab-generator[974]: Ignoring "noauto" option for root device
	I0514 00:18:03.586614    4316 command_runner.go:130] > [  +0.111929] kauditd_printk_skb: 73 callbacks suppressed
	I0514 00:18:03.586654    4316 command_runner.go:130] > [  +0.464883] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	I0514 00:18:03.586690    4316 command_runner.go:130] > [  +0.164872] systemd-fstab-generator[1027]: Ignoring "noauto" option for root device
	I0514 00:18:03.586729    4316 command_runner.go:130] > [  +0.194348] systemd-fstab-generator[1041]: Ignoring "noauto" option for root device
	I0514 00:18:03.586729    4316 command_runner.go:130] > [  +2.832176] systemd-fstab-generator[1229]: Ignoring "noauto" option for root device
	I0514 00:18:03.586772    4316 command_runner.go:130] > [  +0.181315] systemd-fstab-generator[1241]: Ignoring "noauto" option for root device
	I0514 00:18:03.586772    4316 command_runner.go:130] > [  +0.160798] systemd-fstab-generator[1253]: Ignoring "noauto" option for root device
	I0514 00:18:03.586824    4316 command_runner.go:130] > [  +0.238904] systemd-fstab-generator[1268]: Ignoring "noauto" option for root device
	I0514 00:18:03.586824    4316 command_runner.go:130] > [  +0.787359] systemd-fstab-generator[1378]: Ignoring "noauto" option for root device
	I0514 00:18:03.586870    4316 command_runner.go:130] > [  +0.085936] kauditd_printk_skb: 205 callbacks suppressed
	I0514 00:18:03.586870    4316 command_runner.go:130] > [  +3.384697] systemd-fstab-generator[1513]: Ignoring "noauto" option for root device
	I0514 00:18:03.586910    4316 command_runner.go:130] > [  +1.802132] kauditd_printk_skb: 64 callbacks suppressed
	I0514 00:18:03.586910    4316 command_runner.go:130] > [  +5.213940] kauditd_printk_skb: 10 callbacks suppressed
	I0514 00:18:03.586965    4316 command_runner.go:130] > [  +3.471694] systemd-fstab-generator[2315]: Ignoring "noauto" option for root device
	I0514 00:18:03.586965    4316 command_runner.go:130] > [May14 00:17] kauditd_printk_skb: 70 callbacks suppressed
	I0514 00:18:03.588869    4316 logs.go:123] Gathering logs for kube-proxy [b2a1b31cd7de] ...
	I0514 00:18:03.588869    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a1b31cd7de"
	I0514 00:18:03.617649    4316 command_runner.go:130] ! I0514 00:16:57.528613       1 server_linux.go:69] "Using iptables proxy"
	I0514 00:18:03.617649    4316 command_runner.go:130] ! I0514 00:16:57.562847       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.102.122"]
	I0514 00:18:03.617649    4316 command_runner.go:130] ! I0514 00:16:57.701301       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0514 00:18:03.617649    4316 command_runner.go:130] ! I0514 00:16:57.701447       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0514 00:18:03.617649    4316 command_runner.go:130] ! I0514 00:16:57.701476       1 server_linux.go:165] "Using iptables Proxier"
	I0514 00:18:03.617649    4316 command_runner.go:130] ! I0514 00:16:57.708219       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0514 00:18:03.618193    4316 command_runner.go:130] ! I0514 00:16:57.708800       1 server.go:872] "Version info" version="v1.30.0"
	I0514 00:18:03.618193    4316 command_runner.go:130] ! I0514 00:16:57.708841       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:03.618193    4316 command_runner.go:130] ! I0514 00:16:57.712422       1 config.go:192] "Starting service config controller"
	I0514 00:18:03.618272    4316 command_runner.go:130] ! I0514 00:16:57.712733       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0514 00:18:03.618322    4316 command_runner.go:130] ! I0514 00:16:57.712780       1 config.go:101] "Starting endpoint slice config controller"
	I0514 00:18:03.618370    4316 command_runner.go:130] ! I0514 00:16:57.712824       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0514 00:18:03.618370    4316 command_runner.go:130] ! I0514 00:16:57.715339       1 config.go:319] "Starting node config controller"
	I0514 00:18:03.618428    4316 command_runner.go:130] ! I0514 00:16:57.717651       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0514 00:18:03.618428    4316 command_runner.go:130] ! I0514 00:16:57.815732       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0514 00:18:03.618428    4316 command_runner.go:130] ! I0514 00:16:57.815811       1 shared_informer.go:320] Caches are synced for service config
	I0514 00:18:03.618500    4316 command_runner.go:130] ! I0514 00:16:57.818050       1 shared_informer.go:320] Caches are synced for node config
	I0514 00:18:03.621258    4316 logs.go:123] Gathering logs for kube-controller-manager [b87239d1199a] ...
	I0514 00:18:03.621306    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87239d1199a"
	I0514 00:18:03.645040    4316 command_runner.go:130] ! I0514 00:16:52.414723       1 serving.go:380] Generated self-signed cert in-memory
	I0514 00:18:03.645040    4316 command_runner.go:130] ! I0514 00:16:52.798318       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0514 00:18:03.645040    4316 command_runner.go:130] ! I0514 00:16:52.798456       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:03.645040    4316 command_runner.go:130] ! I0514 00:16:52.802364       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0514 00:18:03.645040    4316 command_runner.go:130] ! I0514 00:16:52.802939       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0514 00:18:03.645040    4316 command_runner.go:130] ! I0514 00:16:52.803159       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:03.645040    4316 command_runner.go:130] ! I0514 00:16:52.803510       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:03.645040    4316 command_runner.go:130] ! I0514 00:16:56.867503       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0514 00:18:03.645040    4316 command_runner.go:130] ! I0514 00:16:56.868219       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0514 00:18:03.645040    4316 command_runner.go:130] ! I0514 00:16:56.874269       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0514 00:18:03.645040    4316 command_runner.go:130] ! I0514 00:16:56.878308       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0514 00:18:03.645040    4316 command_runner.go:130] ! I0514 00:16:56.878330       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0514 00:18:03.646082    4316 command_runner.go:130] ! I0514 00:16:56.878409       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0514 00:18:03.646082    4316 command_runner.go:130] ! I0514 00:16:56.878509       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0514 00:18:03.646082    4316 command_runner.go:130] ! I0514 00:16:56.878517       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0514 00:18:03.646082    4316 command_runner.go:130] ! I0514 00:16:56.882632       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0514 00:18:03.646203    4316 command_runner.go:130] ! I0514 00:16:56.882648       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0514 00:18:03.646203    4316 command_runner.go:130] ! I0514 00:16:56.882656       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0514 00:18:03.646203    4316 command_runner.go:130] ! I0514 00:16:56.883478       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0514 00:18:03.646203    4316 command_runner.go:130] ! I0514 00:16:56.883488       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0514 00:18:03.646203    4316 command_runner.go:130] ! I0514 00:16:56.883496       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0514 00:18:03.646322    4316 command_runner.go:130] ! I0514 00:16:56.885766       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0514 00:18:03.646322    4316 command_runner.go:130] ! I0514 00:16:56.888273       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0514 00:18:03.646322    4316 command_runner.go:130] ! I0514 00:16:56.888463       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0514 00:18:03.646411    4316 command_runner.go:130] ! I0514 00:16:56.889304       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0514 00:18:03.646411    4316 command_runner.go:130] ! I0514 00:16:56.890244       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0514 00:18:03.646411    4316 command_runner.go:130] ! I0514 00:16:56.890408       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0514 00:18:03.646411    4316 command_runner.go:130] ! I0514 00:16:56.893619       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0514 00:18:03.646508    4316 command_runner.go:130] ! I0514 00:16:56.903162       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0514 00:18:03.646508    4316 command_runner.go:130] ! I0514 00:16:56.903183       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0514 00:18:03.646508    4316 command_runner.go:130] ! I0514 00:16:56.969340       1 shared_informer.go:320] Caches are synced for tokens
	I0514 00:18:03.646508    4316 command_runner.go:130] ! I0514 00:16:56.982656       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0514 00:18:03.646508    4316 command_runner.go:130] ! I0514 00:16:56.982729       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0514 00:18:03.646508    4316 command_runner.go:130] ! I0514 00:16:56.983268       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0514 00:18:03.646644    4316 command_runner.go:130] ! I0514 00:16:56.983299       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0514 00:18:03.646644    4316 command_runner.go:130] ! I0514 00:16:56.983354       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0514 00:18:03.646732    4316 command_runner.go:130] ! I0514 00:16:56.983426       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0514 00:18:03.646732    4316 command_runner.go:130] ! I0514 00:16:56.983451       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0514 00:18:03.646732    4316 command_runner.go:130] ! W0514 00:16:56.983466       1 shared_informer.go:597] resyncPeriod 15h46m20.096782659s is smaller than resyncCheckPeriod 18h37m10.298700604s and the informer has already started. Changing it to 18h37m10.298700604s
	I0514 00:18:03.646732    4316 command_runner.go:130] ! I0514 00:16:56.983922       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0514 00:18:03.646822    4316 command_runner.go:130] ! I0514 00:16:56.984377       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0514 00:18:03.646822    4316 command_runner.go:130] ! I0514 00:16:56.984435       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0514 00:18:03.646822    4316 command_runner.go:130] ! I0514 00:16:56.984460       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0514 00:18:03.646822    4316 command_runner.go:130] ! I0514 00:16:56.984478       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0514 00:18:03.646908    4316 command_runner.go:130] ! I0514 00:16:56.984528       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0514 00:18:03.646943    4316 command_runner.go:130] ! I0514 00:16:56.984568       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:56.984736       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:56.985288       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:56.995607       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:56.996188       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:56.997004       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:56.997141       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:56.997174       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:56.997363       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:56.997373       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:57.003479       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:57.004086       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:57.004336       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:57.004348       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:17:07.031733       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:17:07.032143       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:17:07.032242       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:17:07.032648       1 shared_informer.go:313] Waiting for caches to sync for node
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:17:07.034995       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:17:07.035109       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:17:07.035510       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:17:07.035544       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:17:07.035551       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:17:07.038183       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:17:07.038394       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0514 00:18:03.647513    4316 command_runner.go:130] ! I0514 00:17:07.039212       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0514 00:18:03.647513    4316 command_runner.go:130] ! I0514 00:17:07.040784       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0514 00:18:03.647513    4316 command_runner.go:130] ! I0514 00:17:07.041050       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0514 00:18:03.647513    4316 command_runner.go:130] ! I0514 00:17:07.041194       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0514 00:18:03.647513    4316 command_runner.go:130] ! I0514 00:17:07.043909       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0514 00:18:03.647513    4316 command_runner.go:130] ! I0514 00:17:07.044044       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0514 00:18:03.647823    4316 command_runner.go:130] ! I0514 00:17:07.044106       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0514 00:18:03.647823    4316 command_runner.go:130] ! I0514 00:17:07.059101       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0514 00:18:03.647823    4316 command_runner.go:130] ! I0514 00:17:07.059352       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0514 00:18:03.647924    4316 command_runner.go:130] ! I0514 00:17:07.059503       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0514 00:18:03.647924    4316 command_runner.go:130] ! I0514 00:17:07.062189       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0514 00:18:03.647924    4316 command_runner.go:130] ! I0514 00:17:07.062615       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0514 00:18:03.647924    4316 command_runner.go:130] ! I0514 00:17:07.062641       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0514 00:18:03.647924    4316 command_runner.go:130] ! I0514 00:17:07.070971       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0514 00:18:03.647991    4316 command_runner.go:130] ! I0514 00:17:07.071021       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0514 00:18:03.647991    4316 command_runner.go:130] ! I0514 00:17:07.071151       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0514 00:18:03.648018    4316 command_runner.go:130] ! I0514 00:17:07.071293       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0514 00:18:03.648018    4316 command_runner.go:130] ! I0514 00:17:07.071328       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0514 00:18:03.648018    4316 command_runner.go:130] ! I0514 00:17:07.071388       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0514 00:18:03.648018    4316 command_runner.go:130] ! I0514 00:17:07.083342       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0514 00:18:03.648018    4316 command_runner.go:130] ! I0514 00:17:07.084321       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0514 00:18:03.648097    4316 command_runner.go:130] ! I0514 00:17:07.084474       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0514 00:18:03.648097    4316 command_runner.go:130] ! I0514 00:17:07.085952       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0514 00:18:03.648097    4316 command_runner.go:130] ! I0514 00:17:07.086347       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0514 00:18:03.648097    4316 command_runner.go:130] ! I0514 00:17:07.086569       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0514 00:18:03.648097    4316 command_runner.go:130] ! I0514 00:17:07.088414       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0514 00:18:03.648162    4316 command_runner.go:130] ! I0514 00:17:07.088731       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0514 00:18:03.648188    4316 command_runner.go:130] ! I0514 00:17:07.089444       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0514 00:18:03.648188    4316 command_runner.go:130] ! I0514 00:17:07.091486       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0514 00:18:03.648188    4316 command_runner.go:130] ! I0514 00:17:07.091650       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0514 00:18:03.648188    4316 command_runner.go:130] ! I0514 00:17:07.091678       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0514 00:18:03.648188    4316 command_runner.go:130] ! I0514 00:17:07.094570       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0514 00:18:03.648266    4316 command_runner.go:130] ! I0514 00:17:07.095467       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0514 00:18:03.648266    4316 command_runner.go:130] ! I0514 00:17:07.095818       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0514 00:18:03.648266    4316 command_runner.go:130] ! I0514 00:17:07.097778       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0514 00:18:03.648266    4316 command_runner.go:130] ! I0514 00:17:07.098911       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0514 00:18:03.648266    4316 command_runner.go:130] ! I0514 00:17:07.098939       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0514 00:18:03.648332    4316 command_runner.go:130] ! I0514 00:17:07.100648       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0514 00:18:03.648359    4316 command_runner.go:130] ! I0514 00:17:07.101514       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0514 00:18:03.648359    4316 command_runner.go:130] ! I0514 00:17:07.101659       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0514 00:18:03.648359    4316 command_runner.go:130] ! I0514 00:17:07.103436       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0514 00:18:03.648359    4316 command_runner.go:130] ! I0514 00:17:07.103908       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0514 00:18:03.648359    4316 command_runner.go:130] ! I0514 00:17:07.109194       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0514 00:18:03.648359    4316 command_runner.go:130] ! I0514 00:17:07.109267       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0514 00:18:03.648437    4316 command_runner.go:130] ! I0514 00:17:07.109496       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0514 00:18:03.648437    4316 command_runner.go:130] ! I0514 00:17:07.113760       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0514 00:18:03.648437    4316 command_runner.go:130] ! I0514 00:17:07.114024       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0514 00:18:03.648437    4316 command_runner.go:130] ! I0514 00:17:07.114252       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0514 00:18:03.648437    4316 command_runner.go:130] ! I0514 00:17:07.115259       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.116925       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.117254       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.117353       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.121368       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.121764       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.121788       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.122128       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.122156       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.122248       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.122301       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.122371       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.122432       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.122464       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.122706       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.123282       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.123678       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.126535       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.126692       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0514 00:18:03.648502    4316 command_runner.go:130] ! E0514 00:17:07.165594       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.165634       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.218097       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.218271       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.218379       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.218721       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.265917       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.266033       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.266045       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.315398       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.315511       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.315534       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.415899       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.416022       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0514 00:18:03.649045    4316 command_runner.go:130] ! I0514 00:17:07.465981       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0514 00:18:03.649045    4316 command_runner.go:130] ! I0514 00:17:07.466026       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0514 00:18:03.649045    4316 command_runner.go:130] ! I0514 00:17:07.466177       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0514 00:18:03.649045    4316 command_runner.go:130] ! I0514 00:17:07.466545       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0514 00:18:03.649045    4316 command_runner.go:130] ! I0514 00:17:07.516337       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0514 00:18:03.649045    4316 command_runner.go:130] ! I0514 00:17:07.516498       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0514 00:18:03.649124    4316 command_runner.go:130] ! I0514 00:17:07.516515       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0514 00:18:03.649124    4316 command_runner.go:130] ! I0514 00:17:07.567477       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0514 00:18:03.649124    4316 command_runner.go:130] ! I0514 00:17:07.567616       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0514 00:18:03.649124    4316 command_runner.go:130] ! I0514 00:17:07.567627       1 shared_informer.go:313] Waiting for caches to sync for job
	I0514 00:18:03.649175    4316 command_runner.go:130] ! I0514 00:17:07.617346       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0514 00:18:03.649175    4316 command_runner.go:130] ! I0514 00:17:07.617464       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0514 00:18:03.649175    4316 command_runner.go:130] ! I0514 00:17:07.617476       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0514 00:18:03.649175    4316 command_runner.go:130] ! E0514 00:17:07.665765       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0514 00:18:03.649175    4316 command_runner.go:130] ! I0514 00:17:07.665865       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0514 00:18:03.649372    4316 command_runner.go:130] ! I0514 00:17:07.665876       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0514 00:18:03.649372    4316 command_runner.go:130] ! I0514 00:17:07.671623       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0514 00:18:03.649372    4316 command_runner.go:130] ! I0514 00:17:07.693623       1 shared_informer.go:320] Caches are synced for crt configmap
	I0514 00:18:03.649372    4316 command_runner.go:130] ! I0514 00:17:07.703208       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0514 00:18:03.649454    4316 command_runner.go:130] ! I0514 00:17:07.707002       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100\" does not exist"
	I0514 00:18:03.649454    4316 command_runner.go:130] ! I0514 00:17:07.707898       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m02\" does not exist"
	I0514 00:18:03.649454    4316 command_runner.go:130] ! I0514 00:17:07.708010       1 shared_informer.go:320] Caches are synced for daemon sets
	I0514 00:18:03.649454    4316 command_runner.go:130] ! I0514 00:17:07.708168       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m03\" does not exist"
	I0514 00:18:03.649513    4316 command_runner.go:130] ! I0514 00:17:07.710800       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0514 00:18:03.649513    4316 command_runner.go:130] ! I0514 00:17:07.710879       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0514 00:18:03.649513    4316 command_runner.go:130] ! I0514 00:17:07.716140       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0514 00:18:03.649549    4316 command_runner.go:130] ! I0514 00:17:07.716709       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0514 00:18:03.649549    4316 command_runner.go:130] ! I0514 00:17:07.717695       1 shared_informer.go:320] Caches are synced for cronjob
	I0514 00:18:03.649549    4316 command_runner.go:130] ! I0514 00:17:07.717710       1 shared_informer.go:320] Caches are synced for stateful set
	I0514 00:18:03.649549    4316 command_runner.go:130] ! I0514 00:17:07.718924       1 shared_informer.go:320] Caches are synced for attach detach
	I0514 00:18:03.649549    4316 command_runner.go:130] ! I0514 00:17:07.723267       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0514 00:18:03.649549    4316 command_runner.go:130] ! I0514 00:17:07.723378       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0514 00:18:03.649549    4316 command_runner.go:130] ! I0514 00:17:07.723467       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0514 00:18:03.649628    4316 command_runner.go:130] ! I0514 00:17:07.723495       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0514 00:18:03.649628    4316 command_runner.go:130] ! I0514 00:17:07.726980       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0514 00:18:03.649628    4316 command_runner.go:130] ! I0514 00:17:07.733271       1 shared_informer.go:320] Caches are synced for node
	I0514 00:18:03.649628    4316 command_runner.go:130] ! I0514 00:17:07.733445       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0514 00:18:03.649628    4316 command_runner.go:130] ! I0514 00:17:07.733467       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0514 00:18:03.649723    4316 command_runner.go:130] ! I0514 00:17:07.733473       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0514 00:18:03.649723    4316 command_runner.go:130] ! I0514 00:17:07.733480       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0514 00:18:03.649723    4316 command_runner.go:130] ! I0514 00:17:07.739996       1 shared_informer.go:320] Caches are synced for expand
	I0514 00:18:03.649723    4316 command_runner.go:130] ! I0514 00:17:07.742032       1 shared_informer.go:320] Caches are synced for PV protection
	I0514 00:18:03.649723    4316 command_runner.go:130] ! I0514 00:17:07.744959       1 shared_informer.go:320] Caches are synced for ephemeral
	I0514 00:18:03.649723    4316 command_runner.go:130] ! I0514 00:17:07.760453       1 shared_informer.go:320] Caches are synced for namespace
	I0514 00:18:03.649820    4316 command_runner.go:130] ! I0514 00:17:07.762790       1 shared_informer.go:320] Caches are synced for service account
	I0514 00:18:03.649820    4316 command_runner.go:130] ! I0514 00:17:07.766175       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0514 00:18:03.649820    4316 command_runner.go:130] ! I0514 00:17:07.767750       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0514 00:18:03.649820    4316 command_runner.go:130] ! I0514 00:17:07.768151       1 shared_informer.go:320] Caches are synced for job
	I0514 00:18:03.649820    4316 command_runner.go:130] ! I0514 00:17:07.779225       1 shared_informer.go:320] Caches are synced for TTL
	I0514 00:18:03.649820    4316 command_runner.go:130] ! I0514 00:17:07.779406       1 shared_informer.go:320] Caches are synced for GC
	I0514 00:18:03.649820    4316 command_runner.go:130] ! I0514 00:17:07.784902       1 shared_informer.go:320] Caches are synced for HPA
	I0514 00:18:03.649820    4316 command_runner.go:130] ! I0514 00:17:07.787441       1 shared_informer.go:320] Caches are synced for persistent volume
	I0514 00:18:03.649820    4316 command_runner.go:130] ! I0514 00:17:07.790178       1 shared_informer.go:320] Caches are synced for PVC protection
	I0514 00:18:03.649908    4316 command_runner.go:130] ! I0514 00:17:07.791571       1 shared_informer.go:320] Caches are synced for endpoint
	I0514 00:18:03.649908    4316 command_runner.go:130] ! I0514 00:17:07.797318       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0514 00:18:03.649908    4316 command_runner.go:130] ! I0514 00:17:07.816750       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0514 00:18:03.649908    4316 command_runner.go:130] ! I0514 00:17:07.836762       1 shared_informer.go:320] Caches are synced for taint
	I0514 00:18:03.649908    4316 command_runner.go:130] ! I0514 00:17:07.837127       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0514 00:18:03.649908    4316 command_runner.go:130] ! I0514 00:17:07.869081       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m03"
	I0514 00:18:03.649969    4316 command_runner.go:130] ! I0514 00:17:07.869544       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m02"
	I0514 00:18:03.649969    4316 command_runner.go:130] ! I0514 00:17:07.869413       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100"
	I0514 00:18:03.650006    4316 command_runner.go:130] ! I0514 00:17:07.870789       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0514 00:18:03.650006    4316 command_runner.go:130] ! I0514 00:17:07.898670       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 00:18:03.650006    4316 command_runner.go:130] ! I0514 00:17:07.901033       1 shared_informer.go:320] Caches are synced for deployment
	I0514 00:18:03.650006    4316 command_runner.go:130] ! I0514 00:17:07.904366       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0514 00:18:03.650006    4316 command_runner.go:130] ! I0514 00:17:07.916125       1 shared_informer.go:320] Caches are synced for disruption
	I0514 00:18:03.650006    4316 command_runner.go:130] ! I0514 00:17:07.977330       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:17:07.988956       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:17:08.134754       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="230.307102ms"
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:17:08.134896       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.6µs"
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:17:08.140785       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="234.508146ms"
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:17:08.140977       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.3µs"
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:17:08.412419       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:17:08.472034       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:17:08.472384       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:17:37.878702       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:18:01.608725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.75856ms"
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:18:01.608844       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.702µs"
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:18:01.651304       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.008µs"
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:18:01.710123       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.783088ms"
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:18:01.711762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.302µs"
	I0514 00:18:03.663561    4316 logs.go:123] Gathering logs for kube-controller-manager [e96f94398d6d] ...
	I0514 00:18:03.663561    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e96f94398d6d"
	I0514 00:18:03.699380    4316 command_runner.go:130] ! I0513 23:56:04.448604       1 serving.go:380] Generated self-signed cert in-memory
	I0514 00:18:03.700268    4316 command_runner.go:130] ! I0513 23:56:04.932336       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0514 00:18:03.700268    4316 command_runner.go:130] ! I0513 23:56:04.932378       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:03.700268    4316 command_runner.go:130] ! I0513 23:56:04.934044       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0514 00:18:03.700268    4316 command_runner.go:130] ! I0513 23:56:04.934133       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:03.700268    4316 command_runner.go:130] ! I0513 23:56:04.934796       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0514 00:18:03.700268    4316 command_runner.go:130] ! I0513 23:56:04.935005       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:03.700268    4316 command_runner.go:130] ! I0513 23:56:09.124957       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0514 00:18:03.700550    4316 command_runner.go:130] ! I0513 23:56:09.125092       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0514 00:18:03.700550    4316 command_runner.go:130] ! I0513 23:56:09.140996       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0514 00:18:03.700617    4316 command_runner.go:130] ! I0513 23:56:09.141447       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0514 00:18:03.700617    4316 command_runner.go:130] ! I0513 23:56:09.141567       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0514 00:18:03.700617    4316 command_runner.go:130] ! I0513 23:56:09.156847       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0514 00:18:03.700676    4316 command_runner.go:130] ! I0513 23:56:09.157241       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0514 00:18:03.700676    4316 command_runner.go:130] ! I0513 23:56:09.157455       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0514 00:18:03.700732    4316 command_runner.go:130] ! I0513 23:56:09.170795       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0514 00:18:03.700773    4316 command_runner.go:130] ! I0513 23:56:09.171005       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0514 00:18:03.700773    4316 command_runner.go:130] ! I0513 23:56:09.171684       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0514 00:18:03.700830    4316 command_runner.go:130] ! I0513 23:56:09.171921       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0514 00:18:03.700830    4316 command_runner.go:130] ! I0513 23:56:09.172144       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0514 00:18:03.700927    4316 command_runner.go:130] ! I0513 23:56:09.183975       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0514 00:18:03.700977    4316 command_runner.go:130] ! I0513 23:56:09.184362       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0514 00:18:03.700977    4316 command_runner.go:130] ! I0513 23:56:09.185233       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0514 00:18:03.701022    4316 command_runner.go:130] ! I0513 23:56:09.230173       1 shared_informer.go:320] Caches are synced for tokens
	I0514 00:18:03.701022    4316 command_runner.go:130] ! I0513 23:56:09.242679       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0514 00:18:03.701022    4316 command_runner.go:130] ! I0513 23:56:09.242735       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0514 00:18:03.701093    4316 command_runner.go:130] ! I0513 23:56:09.242821       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0514 00:18:03.701093    4316 command_runner.go:130] ! I0513 23:56:09.249513       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0514 00:18:03.701143    4316 command_runner.go:130] ! I0513 23:56:09.249614       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0514 00:18:03.701143    4316 command_runner.go:130] ! I0513 23:56:09.249731       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0514 00:18:03.701207    4316 command_runner.go:130] ! I0513 23:56:09.249824       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0514 00:18:03.701207    4316 command_runner.go:130] ! I0513 23:56:09.249912       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.250132       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.250216       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.250270       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.250425       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.250604       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.250656       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.250695       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.250745       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.250794       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.250851       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.250883       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.250994       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.251028       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.251909       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.251999       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.252142       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.305089       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.305302       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.305357       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.305376       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.321907       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.322244       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.322270       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.324160       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.324208       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! E0513 23:56:09.334850       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.335135       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.346530       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.346809       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.346883       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.385297       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.385391       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0514 00:18:03.701808    4316 command_runner.go:130] ! I0513 23:56:09.385403       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0514 00:18:03.701867    4316 command_runner.go:130] ! I0513 23:56:09.542113       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0514 00:18:03.701867    4316 command_runner.go:130] ! I0513 23:56:09.542271       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0514 00:18:03.701930    4316 command_runner.go:130] ! I0513 23:56:09.542284       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0514 00:18:03.701930    4316 command_runner.go:130] ! I0513 23:56:09.581300       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0514 00:18:03.701989    4316 command_runner.go:130] ! I0513 23:56:09.581321       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0514 00:18:03.701989    4316 command_runner.go:130] ! I0513 23:56:09.581454       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:03.702050    4316 command_runner.go:130] ! I0513 23:56:09.581971       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0514 00:18:03.702125    4316 command_runner.go:130] ! I0513 23:56:09.582008       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0514 00:18:03.702125    4316 command_runner.go:130] ! I0513 23:56:09.582030       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:03.702182    4316 command_runner.go:130] ! I0513 23:56:09.582896       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0514 00:18:03.702182    4316 command_runner.go:130] ! I0513 23:56:09.582908       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0514 00:18:03.702253    4316 command_runner.go:130] ! I0513 23:56:09.582922       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:03.702312    4316 command_runner.go:130] ! I0513 23:56:09.583436       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0514 00:18:03.702312    4316 command_runner.go:130] ! I0513 23:56:09.583678       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0514 00:18:03.702374    4316 command_runner.go:130] ! I0513 23:56:09.583691       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0514 00:18:03.702374    4316 command_runner.go:130] ! I0513 23:56:09.583727       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:03.702450    4316 command_runner.go:130] ! I0513 23:56:09.734073       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0514 00:18:03.702450    4316 command_runner.go:130] ! I0513 23:56:09.734159       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0514 00:18:03.702516    4316 command_runner.go:130] ! I0513 23:56:09.734446       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0514 00:18:03.702516    4316 command_runner.go:130] ! I0513 23:56:09.885354       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0514 00:18:03.702574    4316 command_runner.go:130] ! I0513 23:56:09.885756       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0514 00:18:03.702574    4316 command_runner.go:130] ! I0513 23:56:09.885934       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0514 00:18:03.702631    4316 command_runner.go:130] ! I0513 23:56:10.040288       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0514 00:18:03.702631    4316 command_runner.go:130] ! I0513 23:56:10.040486       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0514 00:18:03.702681    4316 command_runner.go:130] ! I0513 23:56:20.090311       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0514 00:18:03.702737    4316 command_runner.go:130] ! I0513 23:56:20.090418       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0514 00:18:03.702737    4316 command_runner.go:130] ! I0513 23:56:20.090428       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0514 00:18:03.702800    4316 command_runner.go:130] ! I0513 23:56:20.090911       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0514 00:18:03.702800    4316 command_runner.go:130] ! I0513 23:56:20.091093       1 shared_informer.go:313] Waiting for caches to sync for node
	I0514 00:18:03.702859    4316 command_runner.go:130] ! I0513 23:56:20.101598       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0514 00:18:03.702859    4316 command_runner.go:130] ! I0513 23:56:20.101778       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0514 00:18:03.702909    4316 command_runner.go:130] ! I0513 23:56:20.101805       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0514 00:18:03.702909    4316 command_runner.go:130] ! I0513 23:56:20.114509       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0514 00:18:03.702964    4316 command_runner.go:130] ! I0513 23:56:20.114580       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0514 00:18:03.703013    4316 command_runner.go:130] ! I0513 23:56:20.114849       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0514 00:18:03.703013    4316 command_runner.go:130] ! I0513 23:56:20.114678       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0514 00:18:03.703068    4316 command_runner.go:130] ! I0513 23:56:20.115038       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0514 00:18:03.703068    4316 command_runner.go:130] ! I0513 23:56:20.115048       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0514 00:18:03.703116    4316 command_runner.go:130] ! E0513 23:56:20.117646       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0514 00:18:03.703183    4316 command_runner.go:130] ! I0513 23:56:20.117865       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0514 00:18:03.703183    4316 command_runner.go:130] ! I0513 23:56:20.130498       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0514 00:18:03.703232    4316 command_runner.go:130] ! I0513 23:56:20.130711       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0514 00:18:03.703232    4316 command_runner.go:130] ! I0513 23:56:20.130932       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0514 00:18:03.703281    4316 command_runner.go:130] ! I0513 23:56:20.143035       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0514 00:18:03.703321    4316 command_runner.go:130] ! I0513 23:56:20.143414       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0514 00:18:03.703371    4316 command_runner.go:130] ! I0513 23:56:20.143607       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0514 00:18:03.703371    4316 command_runner.go:130] ! I0513 23:56:20.160023       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0514 00:18:03.703454    4316 command_runner.go:130] ! I0513 23:56:20.160191       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0514 00:18:03.703475    4316 command_runner.go:130] ! I0513 23:56:20.160215       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0514 00:18:03.703514    4316 command_runner.go:130] ! I0513 23:56:20.170613       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0514 00:18:03.703569    4316 command_runner.go:130] ! I0513 23:56:20.170951       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0514 00:18:03.703609    4316 command_runner.go:130] ! I0513 23:56:20.171064       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0514 00:18:03.703660    4316 command_runner.go:130] ! I0513 23:56:20.179840       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0514 00:18:03.703706    4316 command_runner.go:130] ! I0513 23:56:20.180447       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0514 00:18:03.703706    4316 command_runner.go:130] ! I0513 23:56:20.180590       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0514 00:18:03.703741    4316 command_runner.go:130] ! I0513 23:56:20.190977       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0514 00:18:03.703781    4316 command_runner.go:130] ! I0513 23:56:20.191286       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0514 00:18:03.703781    4316 command_runner.go:130] ! I0513 23:56:20.191448       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0514 00:18:03.703828    4316 command_runner.go:130] ! I0513 23:56:20.204888       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0514 00:18:03.703913    4316 command_runner.go:130] ! I0513 23:56:20.205578       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0514 00:18:03.703963    4316 command_runner.go:130] ! I0513 23:56:20.205670       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0514 00:18:03.703963    4316 command_runner.go:130] ! I0513 23:56:20.239034       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0514 00:18:03.704004    4316 command_runner.go:130] ! I0513 23:56:20.239193       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0514 00:18:03.704004    4316 command_runner.go:130] ! I0513 23:56:20.239262       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0514 00:18:03.704084    4316 command_runner.go:130] ! I0513 23:56:20.482568       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0514 00:18:03.704084    4316 command_runner.go:130] ! I0513 23:56:20.486046       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0514 00:18:03.704137    4316 command_runner.go:130] ! I0513 23:56:20.486073       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0514 00:18:03.704177    4316 command_runner.go:130] ! I0513 23:56:20.486093       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0514 00:18:03.704177    4316 command_runner.go:130] ! I0513 23:56:20.786163       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0514 00:18:03.704255    4316 command_runner.go:130] ! I0513 23:56:20.786358       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0514 00:18:03.704255    4316 command_runner.go:130] ! I0513 23:56:21.082938       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0514 00:18:03.704304    4316 command_runner.go:130] ! I0513 23:56:21.083657       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0514 00:18:03.704346    4316 command_runner.go:130] ! I0513 23:56:21.083743       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0514 00:18:03.704391    4316 command_runner.go:130] ! I0513 23:56:21.238006       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0514 00:18:03.704425    4316 command_runner.go:130] ! I0513 23:56:21.238099       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0514 00:18:03.704516    4316 command_runner.go:130] ! I0513 23:56:21.238152       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0514 00:18:03.704562    4316 command_runner.go:130] ! I0513 23:56:21.238163       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0514 00:18:03.704562    4316 command_runner.go:130] ! I0513 23:56:21.283674       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0514 00:18:03.704596    4316 command_runner.go:130] ! I0513 23:56:21.283751       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0514 00:18:03.704596    4316 command_runner.go:130] ! I0513 23:56:21.283986       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0514 00:18:03.704644    4316 command_runner.go:130] ! I0513 23:56:21.284217       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0514 00:18:03.704686    4316 command_runner.go:130] ! I0513 23:56:21.442664       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0514 00:18:03.704686    4316 command_runner.go:130] ! I0513 23:56:21.442840       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0514 00:18:03.704733    4316 command_runner.go:130] ! I0513 23:56:21.442854       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0514 00:18:03.704733    4316 command_runner.go:130] ! I0513 23:56:21.587997       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0514 00:18:03.704766    4316 command_runner.go:130] ! I0513 23:56:21.588249       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0514 00:18:03.704815    4316 command_runner.go:130] ! I0513 23:56:21.588322       1 shared_informer.go:313] Waiting for caches to sync for job
	I0514 00:18:03.704856    4316 command_runner.go:130] ! I0513 23:56:21.740205       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0514 00:18:03.704856    4316 command_runner.go:130] ! I0513 23:56:21.740392       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0514 00:18:03.704901    4316 command_runner.go:130] ! I0513 23:56:21.740547       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0514 00:18:03.704933    4316 command_runner.go:130] ! I0513 23:56:21.889738       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0514 00:18:03.704933    4316 command_runner.go:130] ! I0513 23:56:21.890053       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0514 00:18:03.704981    4316 command_runner.go:130] ! I0513 23:56:21.890145       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0514 00:18:03.705024    4316 command_runner.go:130] ! I0513 23:56:22.038114       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0514 00:18:03.705024    4316 command_runner.go:130] ! I0513 23:56:22.038197       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0514 00:18:03.705024    4316 command_runner.go:130] ! I0513 23:56:22.038216       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0514 00:18:03.705079    4316 command_runner.go:130] ! I0513 23:56:22.038314       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0514 00:18:03.705129    4316 command_runner.go:130] ! I0513 23:56:22.038329       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0514 00:18:03.705129    4316 command_runner.go:130] ! I0513 23:56:22.291303       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0514 00:18:03.705185    4316 command_runner.go:130] ! I0513 23:56:22.291332       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0514 00:18:03.705185    4316 command_runner.go:130] ! I0513 23:56:22.291999       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0514 00:18:03.705234    4316 command_runner.go:130] ! I0513 23:56:22.299124       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0514 00:18:03.705234    4316 command_runner.go:130] ! I0513 23:56:22.317101       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0514 00:18:03.705289    4316 command_runner.go:130] ! I0513 23:56:22.321553       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100\" does not exist"
	I0514 00:18:03.705338    4316 command_runner.go:130] ! I0513 23:56:22.322540       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0514 00:18:03.705338    4316 command_runner.go:130] ! I0513 23:56:22.335837       1 shared_informer.go:320] Caches are synced for cronjob
	I0514 00:18:03.705393    4316 command_runner.go:130] ! I0513 23:56:22.339493       1 shared_informer.go:320] Caches are synced for PV protection
	I0514 00:18:03.705393    4316 command_runner.go:130] ! I0513 23:56:22.339494       1 shared_informer.go:320] Caches are synced for GC
	I0514 00:18:03.705444    4316 command_runner.go:130] ! I0513 23:56:22.339605       1 shared_informer.go:320] Caches are synced for crt configmap
	I0514 00:18:03.705444    4316 command_runner.go:130] ! I0513 23:56:22.340940       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0514 00:18:03.705499    4316 command_runner.go:130] ! I0513 23:56:22.341044       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0514 00:18:03.705499    4316 command_runner.go:130] ! I0513 23:56:22.342309       1 shared_informer.go:320] Caches are synced for service account
	I0514 00:18:03.705549    4316 command_runner.go:130] ! I0513 23:56:22.343675       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0514 00:18:03.705605    4316 command_runner.go:130] ! I0513 23:56:22.343828       1 shared_informer.go:320] Caches are synced for PVC protection
	I0514 00:18:03.705655    4316 command_runner.go:130] ! I0513 23:56:22.347539       1 shared_informer.go:320] Caches are synced for expand
	I0514 00:18:03.705655    4316 command_runner.go:130] ! I0513 23:56:22.357773       1 shared_informer.go:320] Caches are synced for deployment
	I0514 00:18:03.705655    4316 command_runner.go:130] ! I0513 23:56:22.361377       1 shared_informer.go:320] Caches are synced for ephemeral
	I0514 00:18:03.705711    4316 command_runner.go:130] ! I0513 23:56:22.372019       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0514 00:18:03.705762    4316 command_runner.go:130] ! I0513 23:56:22.380620       1 shared_informer.go:320] Caches are synced for stateful set
	I0514 00:18:03.705762    4316 command_runner.go:130] ! I0513 23:56:22.382092       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0514 00:18:03.705817    4316 command_runner.go:130] ! I0513 23:56:22.382250       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0514 00:18:03.705817    4316 command_runner.go:130] ! I0513 23:56:22.382979       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0514 00:18:03.705865    4316 command_runner.go:130] ! I0513 23:56:22.384565       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0514 00:18:03.705865    4316 command_runner.go:130] ! I0513 23:56:22.384604       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0514 00:18:03.705920    4316 command_runner.go:130] ! I0513 23:56:22.384724       1 shared_informer.go:320] Caches are synced for HPA
	I0514 00:18:03.705920    4316 command_runner.go:130] ! I0513 23:56:22.386009       1 shared_informer.go:320] Caches are synced for TTL
	I0514 00:18:03.705969    4316 command_runner.go:130] ! I0513 23:56:22.386117       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0514 00:18:03.706027    4316 command_runner.go:130] ! I0513 23:56:22.386299       1 shared_informer.go:320] Caches are synced for attach detach
	I0514 00:18:03.706027    4316 command_runner.go:130] ! I0513 23:56:22.389103       1 shared_informer.go:320] Caches are synced for job
	I0514 00:18:03.706027    4316 command_runner.go:130] ! I0513 23:56:22.390596       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0514 00:18:03.706076    4316 command_runner.go:130] ! I0513 23:56:22.391278       1 shared_informer.go:320] Caches are synced for node
	I0514 00:18:03.706131    4316 command_runner.go:130] ! I0513 23:56:22.391538       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0514 00:18:03.706131    4316 command_runner.go:130] ! I0513 23:56:22.391663       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0514 00:18:03.706180    4316 command_runner.go:130] ! I0513 23:56:22.392031       1 shared_informer.go:320] Caches are synced for namespace
	I0514 00:18:03.706237    4316 command_runner.go:130] ! I0513 23:56:22.392207       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0514 00:18:03.706237    4316 command_runner.go:130] ! I0513 23:56:22.392242       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0514 00:18:03.706237    4316 command_runner.go:130] ! I0513 23:56:22.392249       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0514 00:18:03.706299    4316 command_runner.go:130] ! I0513 23:56:22.402105       1 shared_informer.go:320] Caches are synced for daemon sets
	I0514 00:18:03.706299    4316 command_runner.go:130] ! I0513 23:56:22.405500       1 shared_informer.go:320] Caches are synced for disruption
	I0514 00:18:03.706356    4316 command_runner.go:130] ! I0513 23:56:22.406927       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0514 00:18:03.706356    4316 command_runner.go:130] ! I0513 23:56:22.411111       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100" podCIDRs=["10.244.0.0/24"]
	I0514 00:18:03.706356    4316 command_runner.go:130] ! I0513 23:56:22.431075       1 shared_informer.go:320] Caches are synced for persistent volume
	I0514 00:18:03.706455    4316 command_runner.go:130] ! I0513 23:56:22.443663       1 shared_informer.go:320] Caches are synced for endpoint
	I0514 00:18:03.706455    4316 command_runner.go:130] ! I0513 23:56:22.552382       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 00:18:03.706455    4316 command_runner.go:130] ! I0513 23:56:22.573274       1 shared_informer.go:320] Caches are synced for taint
	I0514 00:18:03.706530    4316 command_runner.go:130] ! I0513 23:56:22.573442       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0514 00:18:03.706563    4316 command_runner.go:130] ! I0513 23:56:22.573935       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100"
	I0514 00:18:03.706606    4316 command_runner.go:130] ! I0513 23:56:22.574179       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0514 00:18:03.706645    4316 command_runner.go:130] ! I0513 23:56:22.586849       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0514 00:18:03.706698    4316 command_runner.go:130] ! I0513 23:56:22.602574       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 00:18:03.706742    4316 command_runner.go:130] ! I0513 23:56:23.018846       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 00:18:03.706793    4316 command_runner.go:130] ! I0513 23:56:23.087540       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 00:18:03.706831    4316 command_runner.go:130] ! I0513 23:56:23.087631       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0514 00:18:03.706831    4316 command_runner.go:130] ! I0513 23:56:23.691681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="593.37356ms"
	I0514 00:18:03.706887    4316 command_runner.go:130] ! I0513 23:56:23.736584       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.765409ms"
	I0514 00:18:03.706931    4316 command_runner.go:130] ! I0513 23:56:23.736691       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.105µs"
	I0514 00:18:03.706993    4316 command_runner.go:130] ! I0513 23:56:23.741069       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="82.307µs"
	I0514 00:18:03.706993    4316 command_runner.go:130] ! I0513 23:56:24.558346       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.410112ms"
	I0514 00:18:03.707059    4316 command_runner.go:130] ! I0513 23:56:24.599621       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.388659ms"
	I0514 00:18:03.707109    4316 command_runner.go:130] ! I0513 23:56:24.599778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.705µs"
	I0514 00:18:03.707160    4316 command_runner.go:130] ! I0513 23:56:35.460855       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.604µs"
	I0514 00:18:03.707188    4316 command_runner.go:130] ! I0513 23:56:35.495875       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="63.404µs"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:56:36.868700       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="85.505µs"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:56:36.916603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.935352ms"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:56:36.917123       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.803µs"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:56:37.577172       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:02.230067       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m02\" does not exist"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:02.246355       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m02" podCIDRs=["10.244.1.0/24"]
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:02.603699       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m02"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:22.527169       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:45.791856       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.887671ms"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:45.808219       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.096894ms"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:45.808747       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.005µs"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:45.809833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.705µs"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:45.811263       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.604µs"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:48.526617       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.926472ms"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:48.529326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.302µs"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:48.555529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.972453ms"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:48.556317       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.601µs"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0514 00:03:17.563212       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0514 00:03:17.565297       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m03\" does not exist"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0514 00:03:17.579900       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m03" podCIDRs=["10.244.2.0/24"]
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0514 00:03:17.665892       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m03"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0514 00:03:38.035898       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0514 00:10:17.797191       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0514 00:12:39.070271       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0514 00:12:44.527915       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:03.707760    4316 command_runner.go:130] ! I0514 00:12:44.528275       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m03\" does not exist"
	I0514 00:18:03.707816    4316 command_runner.go:130] ! I0514 00:12:44.543895       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m03" podCIDRs=["10.244.3.0/24"]
	I0514 00:18:03.707876    4316 command_runner.go:130] ! I0514 00:12:49.983419       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:03.707876    4316 command_runner.go:130] ! I0514 00:14:17.920991       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:03.707922    4316 command_runner.go:130] ! I0514 00:14:33.013074       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.740609ms"
	I0514 00:18:03.707922    4316 command_runner.go:130] ! I0514 00:14:33.013918       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.506µs"
	I0514 00:18:03.722425    4316 logs.go:123] Gathering logs for kindnet [b7d8d9a5e5ea] ...
	I0514 00:18:03.722425    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d8d9a5e5ea"
	I0514 00:18:03.745839    4316 command_runner.go:130] ! I0514 00:16:57.751233       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0514 00:18:03.746581    4316 command_runner.go:130] ! I0514 00:16:57.751585       1 main.go:107] hostIP = 172.23.102.122
	I0514 00:18:03.746581    4316 command_runner.go:130] ! podIP = 172.23.102.122
	I0514 00:18:03.746581    4316 command_runner.go:130] ! I0514 00:16:57.752181       1 main.go:116] setting mtu 1500 for CNI 
	I0514 00:18:03.746581    4316 command_runner.go:130] ! I0514 00:16:57.752200       1 main.go:146] kindnetd IP family: "ipv4"
	I0514 00:18:03.746581    4316 command_runner.go:130] ! I0514 00:16:57.752221       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0514 00:18:03.746581    4316 command_runner.go:130] ! I0514 00:17:01.123977       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:03.746657    4316 command_runner.go:130] ! I0514 00:17:04.195495       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:03.746657    4316 command_runner.go:130] ! I0514 00:17:07.267636       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:03.746657    4316 command_runner.go:130] ! I0514 00:17:10.339619       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:03.746657    4316 command_runner.go:130] ! I0514 00:17:13.411801       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:03.746657    4316 command_runner.go:130] ! panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:03.746657    4316 command_runner.go:130] ! goroutine 1 [running]:
	I0514 00:18:03.746657    4316 command_runner.go:130] ! main.main()
	I0514 00:18:03.746657    4316 command_runner.go:130] ! 	/go/src/cmd/kindnetd/main.go:195 +0xd3d
	I0514 00:18:03.748337    4316 logs.go:123] Gathering logs for kube-apiserver [da9e6534cd87] ...
	I0514 00:18:03.748414    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e6534cd87"
	I0514 00:18:03.769734    4316 command_runner.go:130] ! I0514 00:16:52.020111       1 options.go:221] external host was not specified, using 172.23.102.122
	I0514 00:18:03.769734    4316 command_runner.go:130] ! I0514 00:16:52.031119       1 server.go:148] Version: v1.30.0
	I0514 00:18:03.769734    4316 command_runner.go:130] ! I0514 00:16:52.031201       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:03.769734    4316 command_runner.go:130] ! I0514 00:16:52.560170       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0514 00:18:03.769734    4316 command_runner.go:130] ! I0514 00:16:52.562027       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0514 00:18:03.770816    4316 command_runner.go:130] ! I0514 00:16:52.567323       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0514 00:18:03.770816    4316 command_runner.go:130] ! I0514 00:16:52.562214       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0514 00:18:03.770816    4316 command_runner.go:130] ! I0514 00:16:52.570134       1 instance.go:299] Using reconciler: lease
	I0514 00:18:03.770816    4316 command_runner.go:130] ! I0514 00:16:53.544464       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0514 00:18:03.770816    4316 command_runner.go:130] ! W0514 00:16:53.544866       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.770912    4316 command_runner.go:130] ! I0514 00:16:53.780904       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0514 00:18:03.770912    4316 command_runner.go:130] ! I0514 00:16:53.781233       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0514 00:18:03.770912    4316 command_runner.go:130] ! I0514 00:16:54.015006       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0514 00:18:03.770912    4316 command_runner.go:130] ! I0514 00:16:54.172205       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! I0514 00:16:54.186014       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0514 00:18:03.771135    4316 command_runner.go:130] ! W0514 00:16:54.186188       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! W0514 00:16:54.186609       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! I0514 00:16:54.187573       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0514 00:18:03.771135    4316 command_runner.go:130] ! W0514 00:16:54.187695       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! I0514 00:16:54.188811       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0514 00:18:03.771135    4316 command_runner.go:130] ! I0514 00:16:54.190200       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0514 00:18:03.771135    4316 command_runner.go:130] ! W0514 00:16:54.190309       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! W0514 00:16:54.190366       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! I0514 00:16:54.192283       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0514 00:18:03.771135    4316 command_runner.go:130] ! W0514 00:16:54.192583       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! I0514 00:16:54.193726       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0514 00:18:03.771135    4316 command_runner.go:130] ! W0514 00:16:54.193833       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! W0514 00:16:54.193842       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! I0514 00:16:54.194656       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0514 00:18:03.771135    4316 command_runner.go:130] ! W0514 00:16:54.194769       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! W0514 00:16:54.194831       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! I0514 00:16:54.195773       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0514 00:18:03.771135    4316 command_runner.go:130] ! I0514 00:16:54.200522       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0514 00:18:03.771135    4316 command_runner.go:130] ! W0514 00:16:54.200808       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! W0514 00:16:54.201073       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! I0514 00:16:54.202173       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0514 00:18:03.771668    4316 command_runner.go:130] ! W0514 00:16:54.202352       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.771668    4316 command_runner.go:130] ! W0514 00:16:54.202465       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:03.771668    4316 command_runner.go:130] ! I0514 00:16:54.204036       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0514 00:18:03.771668    4316 command_runner.go:130] ! W0514 00:16:54.204232       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0514 00:18:03.771668    4316 command_runner.go:130] ! I0514 00:16:54.213708       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0514 00:18:03.771668    4316 command_runner.go:130] ! W0514 00:16:54.213869       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.771759    4316 command_runner.go:130] ! W0514 00:16:54.213992       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:03.771759    4316 command_runner.go:130] ! I0514 00:16:54.214976       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0514 00:18:03.771759    4316 command_runner.go:130] ! W0514 00:16:54.215217       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.771808    4316 command_runner.go:130] ! W0514 00:16:54.215317       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:03.771808    4316 command_runner.go:130] ! I0514 00:16:54.226860       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0514 00:18:03.771808    4316 command_runner.go:130] ! W0514 00:16:54.227134       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.771860    4316 command_runner.go:130] ! W0514 00:16:54.227258       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:03.771860    4316 command_runner.go:130] ! I0514 00:16:54.230259       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0514 00:18:03.771907    4316 command_runner.go:130] ! I0514 00:16:54.232567       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0514 00:18:03.771907    4316 command_runner.go:130] ! W0514 00:16:54.232734       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0514 00:18:03.771949    4316 command_runner.go:130] ! W0514 00:16:54.232824       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.771949    4316 command_runner.go:130] ! I0514 00:16:54.239186       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0514 00:18:03.771949    4316 command_runner.go:130] ! W0514 00:16:54.239294       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0514 00:18:03.771993    4316 command_runner.go:130] ! W0514 00:16:54.239304       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0514 00:18:03.771993    4316 command_runner.go:130] ! I0514 00:16:54.241605       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0514 00:18:03.771993    4316 command_runner.go:130] ! W0514 00:16:54.241703       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.772208    4316 command_runner.go:130] ! W0514 00:16:54.241712       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:03.772208    4316 command_runner.go:130] ! I0514 00:16:54.242373       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0514 00:18:03.772208    4316 command_runner.go:130] ! W0514 00:16:54.242466       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.772208    4316 command_runner.go:130] ! I0514 00:16:54.259244       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0514 00:18:03.772208    4316 command_runner.go:130] ! W0514 00:16:54.259536       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.772326    4316 command_runner.go:130] ! I0514 00:16:54.792225       1 secure_serving.go:213] Serving securely on [::]:8443
	I0514 00:18:03.772326    4316 command_runner.go:130] ! I0514 00:16:54.792432       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0514 00:18:03.772326    4316 command_runner.go:130] ! I0514 00:16:54.794552       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0514 00:18:03.772392    4316 command_runner.go:130] ! I0514 00:16:54.794677       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.794720       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.795157       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.795787       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.795995       1 controller.go:116] Starting legacy_token_tracking_controller
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.796042       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.796156       1 controller.go:78] Starting OpenAPI AggregationController
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.796272       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.797969       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.798688       1 available_controller.go:423] Starting AvailableConditionController
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.798701       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.799424       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.799667       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.799692       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.800971       1 aggregator.go:163] waiting for initial CRD sync...
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.792447       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.792459       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.792473       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.812587       1 controller.go:139] Starting OpenAPI controller
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.812611       1 controller.go:87] Starting OpenAPI V3 controller
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.812626       1 naming_controller.go:291] Starting NamingConditionController
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.812640       1 establishing_controller.go:76] Starting EstablishingController
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.812660       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.812674       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.812685       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.848957       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.849152       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.850275       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.850299       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.906495       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.938841       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.950730       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.950897       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0514 00:18:03.772983    4316 command_runner.go:130] ! I0514 00:16:54.951294       1 aggregator.go:165] initial CRD sync complete...
	I0514 00:18:03.772983    4316 command_runner.go:130] ! I0514 00:16:54.951545       1 autoregister_controller.go:141] Starting autoregister controller
	I0514 00:18:03.772983    4316 command_runner.go:130] ! I0514 00:16:54.951793       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0514 00:18:03.772983    4316 command_runner.go:130] ! I0514 00:16:54.951875       1 cache.go:39] Caches are synced for autoregister controller
	I0514 00:18:03.772983    4316 command_runner.go:130] ! I0514 00:16:54.962299       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0514 00:18:03.773056    4316 command_runner.go:130] ! I0514 00:16:54.968027       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0514 00:18:03.773056    4316 command_runner.go:130] ! I0514 00:16:54.968302       1 policy_source.go:224] refreshing policies
	I0514 00:18:03.773056    4316 command_runner.go:130] ! I0514 00:16:54.997391       1 shared_informer.go:320] Caches are synced for configmaps
	I0514 00:18:03.773115    4316 command_runner.go:130] ! I0514 00:16:54.999391       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0514 00:18:03.773115    4316 command_runner.go:130] ! I0514 00:16:54.999732       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0514 00:18:03.773115    4316 command_runner.go:130] ! I0514 00:16:54.999871       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0514 00:18:03.773167    4316 command_runner.go:130] ! I0514 00:16:55.037244       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0514 00:18:03.773167    4316 command_runner.go:130] ! I0514 00:16:55.824524       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0514 00:18:03.773167    4316 command_runner.go:130] ! W0514 00:16:56.521956       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.102.122 172.23.106.39]
	I0514 00:18:03.773214    4316 command_runner.go:130] ! I0514 00:16:56.523614       1 controller.go:615] quota admission added evaluator for: endpoints
	I0514 00:18:03.773214    4316 command_runner.go:130] ! I0514 00:16:56.536716       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0514 00:18:03.773257    4316 command_runner.go:130] ! I0514 00:16:57.861026       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0514 00:18:03.773257    4316 command_runner.go:130] ! I0514 00:16:58.068043       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0514 00:18:03.773257    4316 command_runner.go:130] ! I0514 00:16:58.085925       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0514 00:18:03.773303    4316 command_runner.go:130] ! I0514 00:16:58.189328       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0514 00:18:03.773303    4316 command_runner.go:130] ! I0514 00:16:58.200849       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0514 00:18:03.773303    4316 command_runner.go:130] ! W0514 00:17:16.528300       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.102.122]
	I0514 00:18:03.782570    4316 logs.go:123] Gathering logs for coredns [dcc5a109288b] ...
	I0514 00:18:03.782570    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc5a109288b"
	I0514 00:18:03.806231    4316 command_runner.go:130] > .:53
	I0514 00:18:03.806231    4316 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = aa3c53a4fee7c79042020c4ad5abc53f615c90ace85c56ddcef4febd643c83c914a53a500e1bfe4eab6dd4f6a22b9d2014a8ba875b505ed10d3063ed95ac2ed3
	I0514 00:18:03.806231    4316 command_runner.go:130] > CoreDNS-1.11.1
	I0514 00:18:03.806231    4316 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0514 00:18:03.806231    4316 command_runner.go:130] > [INFO] 127.0.0.1:53257 - 27032 "HINFO IN 6976640239659908905.245956973392320689. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.05278328s
	I0514 00:18:03.806493    4316 logs.go:123] Gathering logs for kube-proxy [91edaaa00da2] ...
	I0514 00:18:03.806493    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91edaaa00da2"
	I0514 00:18:03.829958    4316 command_runner.go:130] ! I0513 23:56:24.901713       1 server_linux.go:69] "Using iptables proxy"
	I0514 00:18:03.829958    4316 command_runner.go:130] ! I0513 23:56:24.929714       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.106.39"]
	I0514 00:18:03.830673    4316 command_runner.go:130] ! I0513 23:56:24.982680       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0514 00:18:03.830722    4316 command_runner.go:130] ! I0513 23:56:24.982795       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0514 00:18:03.830722    4316 command_runner.go:130] ! I0513 23:56:24.982816       1 server_linux.go:165] "Using iptables Proxier"
	I0514 00:18:03.830787    4316 command_runner.go:130] ! I0513 23:56:24.988669       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0514 00:18:03.830835    4316 command_runner.go:130] ! I0513 23:56:24.989566       1 server.go:872] "Version info" version="v1.30.0"
	I0514 00:18:03.830864    4316 command_runner.go:130] ! I0513 23:56:24.989671       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:03.830864    4316 command_runner.go:130] ! I0513 23:56:24.992700       1 config.go:192] "Starting service config controller"
	I0514 00:18:03.830864    4316 command_runner.go:130] ! I0513 23:56:24.993131       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0514 00:18:03.830864    4316 command_runner.go:130] ! I0513 23:56:24.993327       1 config.go:101] "Starting endpoint slice config controller"
	I0514 00:18:03.830952    4316 command_runner.go:130] ! I0513 23:56:24.993339       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0514 00:18:03.830990    4316 command_runner.go:130] ! I0513 23:56:24.994714       1 config.go:319] "Starting node config controller"
	I0514 00:18:03.830990    4316 command_runner.go:130] ! I0513 23:56:24.994744       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0514 00:18:03.830990    4316 command_runner.go:130] ! I0513 23:56:25.094420       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0514 00:18:03.831082    4316 command_runner.go:130] ! I0513 23:56:25.094530       1 shared_informer.go:320] Caches are synced for service config
	I0514 00:18:03.831082    4316 command_runner.go:130] ! I0513 23:56:25.094981       1 shared_informer.go:320] Caches are synced for node config
	I0514 00:18:06.348723    4316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 00:18:06.370900    4316 command_runner.go:130] > 1838
	I0514 00:18:06.371311    4316 api_server.go:72] duration metric: took 1m6.6979187s to wait for apiserver process to appear ...
	I0514 00:18:06.371311    4316 api_server.go:88] waiting for apiserver healthz status ...
	I0514 00:18:06.377504    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0514 00:18:06.399039    4316 command_runner.go:130] > da9e6534cd87
	I0514 00:18:06.399039    4316 logs.go:276] 1 containers: [da9e6534cd87]
	I0514 00:18:06.409402    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0514 00:18:06.427536    4316 command_runner.go:130] > 08450c853590
	I0514 00:18:06.427536    4316 logs.go:276] 1 containers: [08450c853590]
	I0514 00:18:06.433810    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0514 00:18:06.454065    4316 command_runner.go:130] > dcc5a109288b
	I0514 00:18:06.454065    4316 command_runner.go:130] > 76c5ab7859ef
	I0514 00:18:06.454965    4316 logs.go:276] 2 containers: [dcc5a109288b 76c5ab7859ef]
	I0514 00:18:06.462871    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0514 00:18:06.481938    4316 command_runner.go:130] > d3581c1c570c
	I0514 00:18:06.482759    4316 command_runner.go:130] > 964887fc5d36
	I0514 00:18:06.482759    4316 logs.go:276] 2 containers: [d3581c1c570c 964887fc5d36]
	I0514 00:18:06.490925    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0514 00:18:06.513757    4316 command_runner.go:130] > b2a1b31cd7de
	I0514 00:18:06.513757    4316 command_runner.go:130] > 91edaaa00da2
	I0514 00:18:06.513757    4316 logs.go:276] 2 containers: [b2a1b31cd7de 91edaaa00da2]
	I0514 00:18:06.521144    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0514 00:18:06.544950    4316 command_runner.go:130] > b87239d1199a
	I0514 00:18:06.544950    4316 command_runner.go:130] > e96f94398d6d
	I0514 00:18:06.544950    4316 logs.go:276] 2 containers: [b87239d1199a e96f94398d6d]
	I0514 00:18:06.551406    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0514 00:18:06.571695    4316 command_runner.go:130] > 2b424a7cd98c
	I0514 00:18:06.572338    4316 command_runner.go:130] > b7d8d9a5e5ea
	I0514 00:18:06.572338    4316 logs.go:276] 2 containers: [2b424a7cd98c b7d8d9a5e5ea]
	I0514 00:18:06.572459    4316 logs.go:123] Gathering logs for kubelet ...
	I0514 00:18:06.572459    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 kubelet[1385]: I0514 00:16:46.507609    1385 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 kubelet[1385]: I0514 00:16:46.507660    1385 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 kubelet[1385]: I0514 00:16:46.508230    1385 server.go:927] "Client rotation is on, will bootstrap in background"
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 kubelet[1385]: E0514 00:16:46.508906    1385 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 kubelet[1441]: I0514 00:16:47.229791    1441 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 kubelet[1441]: I0514 00:16:47.229941    1441 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 kubelet[1441]: I0514 00:16:47.230764    1441 server.go:927] "Client rotation is on, will bootstrap in background"
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 kubelet[1441]: E0514 00:16:47.231303    1441 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.717000    1520 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.717452    1520 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.717850    1520 server.go:927] "Client rotation is on, will bootstrap in background"
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.719747    1520 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.734764    1520 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.754342    1520 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.754443    1520 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.755707    1520 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0514 00:18:06.606557    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.755788    1520 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-101100","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0514 00:18:06.606557    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.756671    1520 topology_manager.go:138] "Creating topology manager with none policy"
	I0514 00:18:06.606607    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.756747    1520 container_manager_linux.go:301] "Creating device plugin manager"
	I0514 00:18:06.606648    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.757344    1520 state_mem.go:36] "Initialized new in-memory state store"
	I0514 00:18:06.606648    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.758885    1520 kubelet.go:400] "Attempting to sync node with API server"
	I0514 00:18:06.606684    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.759591    1520 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0514 00:18:06.606684    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.759727    1520 kubelet.go:312] "Adding apiserver pod source"
	I0514 00:18:06.606723    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.760630    1520 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0514 00:18:06.606759    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.765370    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-101100&limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.606798    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.765512    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-101100&limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.606833    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.767039    1520 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0514 00:18:06.606872    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.771297    1520 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0514 00:18:06.606907    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.771834    1520 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0514 00:18:06.606907    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.773545    1520 server.go:1264] "Started kubelet"
	I0514 00:18:06.606946    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.773829    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.606981    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.774013    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.607092    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.780360    1520 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.23.102.122:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-101100.17cf32c62bf0274b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-101100,UID:multinode-101100,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-101100,},FirstTimestamp:2024-05-14 00:16:49.773520715 +0000 UTC m=+0.124549330,LastTimestamp:2024-05-14 00:16:49.773520715 +0000 UTC m=+0.124549330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-101100,}"
	I0514 00:18:06.607127    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.781297    1520 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0514 00:18:06.607164    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.786484    1520 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0514 00:18:06.607164    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.787784    1520 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0514 00:18:06.607164    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.792005    1520 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0514 00:18:06.607164    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.800317    1520 server.go:455] "Adding debug handlers to kubelet server"
	I0514 00:18:06.607254    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.805202    1520 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0514 00:18:06.607254    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.805290    1520 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0514 00:18:06.607254    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.812186    1520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-101100?timeout=10s\": dial tcp 172.23.102.122:8443: connect: connection refused" interval="200ms"
	I0514 00:18:06.607341    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.812333    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.607372    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.812369    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.607372    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.816781    1520 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0514 00:18:06.607422    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.816881    1520 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0514 00:18:06.607422    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.816892    1520 factory.go:221] Registration of the systemd container factory successfully
	I0514 00:18:06.607422    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.849206    1520 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0514 00:18:06.607483    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.849426    1520 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0514 00:18:06.607483    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.849585    1520 state_mem.go:36] "Initialized new in-memory state store"
	I0514 00:18:06.607483    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.850764    1520 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0514 00:18:06.607483    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.850799    1520 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0514 00:18:06.607483    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.850826    1520 policy_none.go:49] "None policy: Start"
	I0514 00:18:06.607544    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.855604    1520 reconciler.go:26] "Reconciler: start to sync state"
	I0514 00:18:06.607544    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.884024    1520 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0514 00:18:06.607544    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.884165    1520 state_mem.go:35] "Initializing new in-memory state store"
	I0514 00:18:06.607544    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.886215    1520 state_mem.go:75] "Updated machine memory state"
	I0514 00:18:06.607544    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.888657    1520 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0514 00:18:06.607615    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.888839    1520 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0514 00:18:06.607615    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.891306    1520 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0514 00:18:06.607646    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.897961    1520 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0514 00:18:06.607646    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.898040    1520 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0514 00:18:06.607646    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.898088    1520 kubelet.go:2337] "Starting kubelet main sync loop"
	I0514 00:18:06.607646    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.898127    1520 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0514 00:18:06.609192    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.898551    1520 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0514 00:18:06.609246    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.899218    1520 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-101100\" not found"
	I0514 00:18:06.609334    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.900215    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.609365    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.900324    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.609365    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.907443    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:06.609433    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.909152    1520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.102.122:8443: connect: connection refused" node="multinode-101100"
	I0514 00:18:06.609463    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.912132    1520 iptables.go:577] "Could not set up iptables canary" err=<
	I0514 00:18:06.609463    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0514 00:18:06.609511    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0514 00:18:06.609511    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0514 00:18:06.609573    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0514 00:18:06.609573    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.999139    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f7c140951f4f8270da243f55135e9f108f3cdf5ef11a4e990e06822ace5adbd"
	I0514 00:18:06.609658    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.999762    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90d7537422a83c9a57ab3bed978e87441e2725a75ebc91f5cad3319d11d4ea18"
	I0514 00:18:06.609686    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.999846    1520 topology_manager.go:215] "Topology Admit Handler" podUID="378d61cf78af695f1df41e321907a84d" podNamespace="kube-system" podName="kube-apiserver-multinode-101100"
	I0514 00:18:06.609751    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.000880    1520 topology_manager.go:215] "Topology Admit Handler" podUID="5393de2704b2efef461d22fa52aa93c8" podNamespace="kube-system" podName="kube-controller-manager-multinode-101100"
	I0514 00:18:06.609779    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.002201    1520 topology_manager.go:215] "Topology Admit Handler" podUID="8083abd658221f47cabf81a00c4ca98e" podNamespace="kube-system" podName="kube-scheduler-multinode-101100"
	I0514 00:18:06.609779    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.004707    1520 topology_manager.go:215] "Topology Admit Handler" podUID="62d8afc7714e8ab65bff9675d120bb67" podNamespace="kube-system" podName="etcd-multinode-101100"
	I0514 00:18:06.609821    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.007687    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcb3b27edcd2a44b67fad4a74f438a62eec78b20422f6f952396053574dfb97e"
	I0514 00:18:06.609821    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.007796    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da9268fd6556bae4d0109c5065588160bcf737c35e1e5df738d31786425c22ff"
	I0514 00:18:06.609898    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.007891    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bd694480978f356b61313108a6ff716a8d5f6e854fea1e4aa89a76a68d049f0"
	I0514 00:18:06.609898    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.007938    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="287e744a4dc2e511f4e40696c7d3b4193896c0c40a5bb527e569d1d3ec2cb908"
	I0514 00:18:06.609898    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.013966    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad0550a5dabf16106fc2956251a65bccdc32f3f3be1f27246f675964fd548a1f"
	I0514 00:18:06.609989    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.014759    1520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-101100?timeout=10s\": dial tcp 172.23.102.122:8443: connect: connection refused" interval="400ms"
	I0514 00:18:06.609989    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.031437    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76d1b8ce19aba5b210540936b7a4b3d885cf4632a985872e3cf05d6cea2e0ca2"
	I0514 00:18:06.610049    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.048649    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bb49b28c842af421711ef939d018058baa07a32bbcdc98976511d4800986697"
	I0514 00:18:06.610049    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074775    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/378d61cf78af695f1df41e321907a84d-ca-certs\") pod \"kube-apiserver-multinode-101100\" (UID: \"378d61cf78af695f1df41e321907a84d\") " pod="kube-system/kube-apiserver-multinode-101100"
	I0514 00:18:06.610135    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074859    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/378d61cf78af695f1df41e321907a84d-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-101100\" (UID: \"378d61cf78af695f1df41e321907a84d\") " pod="kube-system/kube-apiserver-multinode-101100"
	I0514 00:18:06.610179    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074906    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-k8s-certs\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:06.610179    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074943    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:06.610239    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074981    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/62d8afc7714e8ab65bff9675d120bb67-etcd-certs\") pod \"etcd-multinode-101100\" (UID: \"62d8afc7714e8ab65bff9675d120bb67\") " pod="kube-system/etcd-multinode-101100"
	I0514 00:18:06.610298    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075015    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/62d8afc7714e8ab65bff9675d120bb67-etcd-data\") pod \"etcd-multinode-101100\" (UID: \"62d8afc7714e8ab65bff9675d120bb67\") " pod="kube-system/etcd-multinode-101100"
	I0514 00:18:06.610298    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075045    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/378d61cf78af695f1df41e321907a84d-k8s-certs\") pod \"kube-apiserver-multinode-101100\" (UID: \"378d61cf78af695f1df41e321907a84d\") " pod="kube-system/kube-apiserver-multinode-101100"
	I0514 00:18:06.610383    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075248    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-ca-certs\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:06.610413    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075285    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-flexvolume-dir\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:06.610456    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075316    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-kubeconfig\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:06.610527    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075345    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8083abd658221f47cabf81a00c4ca98e-kubeconfig\") pod \"kube-scheduler-multinode-101100\" (UID: \"8083abd658221f47cabf81a00c4ca98e\") " pod="kube-system/kube-scheduler-multinode-101100"
	I0514 00:18:06.610527    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.111262    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:06.610527    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.112979    1520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.102.122:8443: connect: connection refused" node="multinode-101100"
	I0514 00:18:06.610588    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.416229    1520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-101100?timeout=10s\": dial tcp 172.23.102.122:8443: connect: connection refused" interval="800ms"
	I0514 00:18:06.610588    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.515338    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:06.610657    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.516940    1520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.102.122:8443: connect: connection refused" node="multinode-101100"
	I0514 00:18:06.610657    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: W0514 00:16:50.730920    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.610733    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.730993    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.610733    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: W0514 00:16:51.074200    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.610794    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.074270    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.610850    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: I0514 00:16:51.076835    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="419648c0d4053fc49953367496f1dbfe0fc7ce631e09569d18f5031a7c94053b"
	I0514 00:18:06.610850    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: W0514 00:16:51.081775    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-101100&limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.610940    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.081938    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-101100&limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.610973    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: I0514 00:16:51.108133    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="509b8407e0955daa05e6418b83790728e61d0bd72fecdd814c8e92ae9e80d3a3"
	I0514 00:18:06.610973    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.218458    1520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-101100?timeout=10s\": dial tcp 172.23.102.122:8443: connect: connection refused" interval="1.6s"
	I0514 00:18:06.611042    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: I0514 00:16:51.318715    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:06.611079    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.319804    1520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.102.122:8443: connect: connection refused" node="multinode-101100"
	I0514 00:18:06.611079    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: W0514 00:16:51.367337    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.611116    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.367409    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.611116    4316 command_runner.go:130] > May 14 00:16:52 multinode-101100 kubelet[1520]: I0514 00:16:52.921237    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:06.611181    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.086028    1520 kubelet_node_status.go:112] "Node was previously registered" node="multinode-101100"
	I0514 00:18:06.611181    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.086698    1520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-multinode-101100\" already exists" pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:06.611181    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.086743    1520 kubelet_node_status.go:76] "Successfully registered node" node="multinode-101100"
	I0514 00:18:06.611237    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.088971    1520 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0514 00:18:06.611237    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.090614    1520 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0514 00:18:06.611237    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.091996    1520 setters.go:580] "Node became not ready" node="multinode-101100" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-05-14T00:16:55Z","lastTransitionTime":"2024-05-14T00:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0514 00:18:06.611318    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.783435    1520 apiserver.go:52] "Watching apiserver"
	I0514 00:18:06.611396    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.788503    1520 topology_manager.go:215] "Topology Admit Handler" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4kmx4"
	I0514 00:18:06.611396    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.788795    1520 topology_manager.go:215] "Topology Admit Handler" podUID="5b3ee167-f21f-46b3-bace-03a7233717e0" podNamespace="kube-system" podName="kindnet-9q2tv"
	I0514 00:18:06.611396    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.788932    1520 topology_manager.go:215] "Topology Admit Handler" podUID="a9a488af-41ba-47f3-87b0-5a2f062afad6" podNamespace="kube-system" podName="kube-proxy-zhcz6"
	I0514 00:18:06.611396    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.789028    1520 topology_manager.go:215] "Topology Admit Handler" podUID="a92f04b8-a93f-42d8-81d7-d4da6bf2e247" podNamespace="kube-system" podName="storage-provisioner"
	I0514 00:18:06.611396    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.789184    1520 topology_manager.go:215] "Topology Admit Handler" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae" podNamespace="default" podName="busybox-fc5497c4f-xqj6w"
	I0514 00:18:06.611396    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.789553    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.611396    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.789850    1520 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-101100" podUID="1d9c79a4-1e4a-46fb-b3e8-02a4775f40af"
	I0514 00:18:06.611396    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.790329    1520 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-101100" podUID="cd31d030-75f8-4abb-bcad-34031cec7aa6"
	I0514 00:18:06.611396    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.794088    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.611396    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.798934    1520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-multinode-101100\" already exists" pod="kube-system/kube-scheduler-multinode-101100"
	I0514 00:18:06.611396    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.809466    1520 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0514 00:18:06.611396    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.835196    1520 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-101100"
	I0514 00:18:06.611924    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.857783    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5b3ee167-f21f-46b3-bace-03a7233717e0-cni-cfg\") pod \"kindnet-9q2tv\" (UID: \"5b3ee167-f21f-46b3-bace-03a7233717e0\") " pod="kube-system/kindnet-9q2tv"
	I0514 00:18:06.611967    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.857845    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b3ee167-f21f-46b3-bace-03a7233717e0-xtables-lock\") pod \"kindnet-9q2tv\" (UID: \"5b3ee167-f21f-46b3-bace-03a7233717e0\") " pod="kube-system/kindnet-9q2tv"
	I0514 00:18:06.612026    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.857866    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9a488af-41ba-47f3-87b0-5a2f062afad6-xtables-lock\") pod \"kube-proxy-zhcz6\" (UID: \"a9a488af-41ba-47f3-87b0-5a2f062afad6\") " pod="kube-system/kube-proxy-zhcz6"
	I0514 00:18:06.612088    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.857954    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b3ee167-f21f-46b3-bace-03a7233717e0-lib-modules\") pod \"kindnet-9q2tv\" (UID: \"5b3ee167-f21f-46b3-bace-03a7233717e0\") " pod="kube-system/kindnet-9q2tv"
	I0514 00:18:06.612111    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.858020    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a92f04b8-a93f-42d8-81d7-d4da6bf2e247-tmp\") pod \"storage-provisioner\" (UID: \"a92f04b8-a93f-42d8-81d7-d4da6bf2e247\") " pod="kube-system/storage-provisioner"
	I0514 00:18:06.612176    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.858051    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9a488af-41ba-47f3-87b0-5a2f062afad6-lib-modules\") pod \"kube-proxy-zhcz6\" (UID: \"a9a488af-41ba-47f3-87b0-5a2f062afad6\") " pod="kube-system/kube-proxy-zhcz6"
	I0514 00:18:06.612176    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.859176    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:06.612225    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.859325    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:16:56.359260421 +0000 UTC m=+6.710289036 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:06.612290    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.873841    1520 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-101100"
	I0514 00:18:06.612290    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.907826    1520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03d9b35578220c9e99f77722d9aa294f" path="/var/lib/kubelet/pods/03d9b35578220c9e99f77722d9aa294f/volumes"
	I0514 00:18:06.612360    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.910490    1520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1af4b764a5249ff25d3c1c709387c273" path="/var/lib/kubelet/pods/1af4b764a5249ff25d3c1c709387c273/volumes"
	I0514 00:18:06.612360    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.917375    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.612415    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.917415    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.612461    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.917466    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:16:56.417450852 +0000 UTC m=+6.768479567 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.612512    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.964380    1520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-101100" podStartSLOduration=0.9643304 podStartE2EDuration="964.3304ms" podCreationTimestamp="2024-05-14 00:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-14 00:16:55.964174289 +0000 UTC m=+6.315203004" watchObservedRunningTime="2024-05-14 00:16:55.9643304 +0000 UTC m=+6.315359015"
	I0514 00:18:06.612572    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.985118    1520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-101100" podStartSLOduration=0.985100539 podStartE2EDuration="985.100539ms" podCreationTimestamp="2024-05-14 00:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-14 00:16:55.984806519 +0000 UTC m=+6.335835134" watchObservedRunningTime="2024-05-14 00:16:55.985100539 +0000 UTC m=+6.336129154"
	I0514 00:18:06.612624    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.362973    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:06.612684    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.363041    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:16:57.363025821 +0000 UTC m=+7.714054436 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:06.612684    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.463836    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.612684    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.463868    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.612799    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.463923    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:16:57.46390701 +0000 UTC m=+7.814935725 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.377986    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.378101    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:16:59.378049439 +0000 UTC m=+9.729078054 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.478290    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.478356    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.478448    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:16:59.478431994 +0000 UTC m=+9.829460709 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.899119    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.899678    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.394980    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.395173    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:17:03.39515828 +0000 UTC m=+13.746186895 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.496260    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.496313    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.496438    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:17:03.496350091 +0000 UTC m=+13.847378806 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.891391    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.901591    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.914896    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:17:01 multinode-101100 kubelet[1520]: E0514 00:17:01.898894    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.613349    4316 command_runner.go:130] > May 14 00:17:01 multinode-101100 kubelet[1520]: E0514 00:17:01.899345    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.613349    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.445887    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:06.613425    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.445965    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:17:11.44595071 +0000 UTC m=+21.796979425 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:06.613457    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.547258    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.613457    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.547292    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.547346    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:17:11.547331033 +0000 UTC m=+21.898359648 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.899515    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.900290    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:04 multinode-101100 kubelet[1520]: E0514 00:17:04.893282    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:05 multinode-101100 kubelet[1520]: E0514 00:17:05.900260    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:05 multinode-101100 kubelet[1520]: E0514 00:17:05.900651    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:07 multinode-101100 kubelet[1520]: E0514 00:17:07.899212    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:07 multinode-101100 kubelet[1520]: E0514 00:17:07.899658    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:09 multinode-101100 kubelet[1520]: E0514 00:17:09.895008    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:09 multinode-101100 kubelet[1520]: E0514 00:17:09.899381    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:09 multinode-101100 kubelet[1520]: E0514 00:17:09.899884    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.508629    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.508833    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:17:27.508813455 +0000 UTC m=+37.859842170 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.609334    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.609455    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.609579    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:17:27.609562102 +0000 UTC m=+37.960590817 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.614047    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.899431    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.614087    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.899749    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.614131    4316 command_runner.go:130] > May 14 00:17:13 multinode-101100 kubelet[1520]: E0514 00:17:13.898578    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.614131    4316 command_runner.go:130] > May 14 00:17:13 multinode-101100 kubelet[1520]: E0514 00:17:13.899676    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.614184    4316 command_runner.go:130] > May 14 00:17:14 multinode-101100 kubelet[1520]: E0514 00:17:14.897029    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:06.614184    4316 command_runner.go:130] > May 14 00:17:15 multinode-101100 kubelet[1520]: E0514 00:17:15.899665    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.614259    4316 command_runner.go:130] > May 14 00:17:15 multinode-101100 kubelet[1520]: E0514 00:17:15.900476    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.614259    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: I0514 00:17:17.766386    1520 scope.go:117] "RemoveContainer" containerID="9c4eb727cedb65853cc3a94fdcc3e267ed41cd9cb15ef1cc1bb84f6f2278c9c4"
	I0514 00:18:06.614310    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: I0514 00:17:17.767364    1520 scope.go:117] "RemoveContainer" containerID="b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7"
	I0514 00:18:06.614310    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: E0514 00:17:17.767901    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kindnet-cni pod=kindnet-9q2tv_kube-system(5b3ee167-f21f-46b3-bace-03a7233717e0)\"" pod="kube-system/kindnet-9q2tv" podUID="5b3ee167-f21f-46b3-bace-03a7233717e0"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: E0514 00:17:17.898891    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: E0514 00:17:17.899300    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:19 multinode-101100 kubelet[1520]: E0514 00:17:19.898102    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:19 multinode-101100 kubelet[1520]: E0514 00:17:19.899045    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:19 multinode-101100 kubelet[1520]: E0514 00:17:19.899315    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:21 multinode-101100 kubelet[1520]: E0514 00:17:21.900488    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:21 multinode-101100 kubelet[1520]: E0514 00:17:21.900677    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:23 multinode-101100 kubelet[1520]: E0514 00:17:23.899091    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:23 multinode-101100 kubelet[1520]: E0514 00:17:23.899625    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:24 multinode-101100 kubelet[1520]: E0514 00:17:24.899382    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:25 multinode-101100 kubelet[1520]: E0514 00:17:25.900463    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:25 multinode-101100 kubelet[1520]: E0514 00:17:25.900948    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.550622    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.550839    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:17:59.550821042 +0000 UTC m=+69.901849657 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.651942    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.651988    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.652038    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:17:59.652024653 +0000 UTC m=+70.003053368 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.900302    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.901190    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: I0514 00:17:27.901408    1520 scope.go:117] "RemoveContainer" containerID="b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: I0514 00:17:27.913749    1520 scope.go:117] "RemoveContainer" containerID="e6ee22ee5c1b88cb0b1190c646094aefe229bfbd4486f007cde2b36da39ca886"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: I0514 00:17:27.914050    1520 scope.go:117] "RemoveContainer" containerID="b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.914651    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a92f04b8-a93f-42d8-81d7-d4da6bf2e247)\"" pod="kube-system/storage-provisioner" podUID="a92f04b8-a93f-42d8-81d7-d4da6bf2e247"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:29 multinode-101100 kubelet[1520]: E0514 00:17:29.898652    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:29 multinode-101100 kubelet[1520]: E0514 00:17:29.899154    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:29 multinode-101100 kubelet[1520]: E0514 00:17:29.900744    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:31 multinode-101100 kubelet[1520]: E0514 00:17:31.900407    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:31 multinode-101100 kubelet[1520]: E0514 00:17:31.902295    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:33 multinode-101100 kubelet[1520]: E0514 00:17:33.898560    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:33 multinode-101100 kubelet[1520]: E0514 00:17:33.899627    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:39 multinode-101100 kubelet[1520]: I0514 00:17:39.899892    1520 scope.go:117] "RemoveContainer" containerID="b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]: I0514 00:17:49.888753    1520 scope.go:117] "RemoveContainer" containerID="eda79d47d28ffbc726bec7eaad072eeebb31ec439ed9bbe9fd544b9913b8f3ea"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]: E0514 00:17:49.924547    1520 iptables.go:577] "Could not set up iptables canary" err=<
	I0514 00:18:06.615452    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0514 00:18:06.615452    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0514 00:18:06.615492    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0514 00:18:06.615492    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0514 00:18:06.615492    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]: I0514 00:17:49.932695    1520 scope.go:117] "RemoveContainer" containerID="06f1a683cad8348fc4f8e339f226bbda12c4e8c1025c7acb52e2792253dd3008"
	I0514 00:18:06.615492    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 kubelet[1520]: I0514 00:18:00.478966    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cccb5e8cee3b173bd49a88aee4239ccc8bc11a3a166316e92f3a9abce9b252d"
	I0514 00:18:06.615492    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 kubelet[1520]: I0514 00:18:00.543407    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cb9b6d6d0915742a78c054211d49332a04beb4875f8a8f80cc4131b2a11aa2d"
	I0514 00:18:06.654604    4316 logs.go:123] Gathering logs for kube-scheduler [964887fc5d36] ...
	I0514 00:18:06.654604    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964887fc5d36"
	I0514 00:18:06.689073    4316 command_runner.go:130] ! I0513 23:56:04.693680       1 serving.go:380] Generated self-signed cert in-memory
	I0514 00:18:06.689470    4316 command_runner.go:130] ! W0513 23:56:06.133341       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0514 00:18:06.689572    4316 command_runner.go:130] ! W0513 23:56:06.133396       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:06.689642    4316 command_runner.go:130] ! W0513 23:56:06.133407       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0514 00:18:06.689642    4316 command_runner.go:130] ! W0513 23:56:06.133415       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0514 00:18:06.689710    4316 command_runner.go:130] ! I0513 23:56:06.170291       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0514 00:18:06.689710    4316 command_runner.go:130] ! I0513 23:56:06.170533       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:06.689763    4316 command_runner.go:130] ! I0513 23:56:06.174536       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0514 00:18:06.689797    4316 command_runner.go:130] ! I0513 23:56:06.174684       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0514 00:18:06.689797    4316 command_runner.go:130] ! I0513 23:56:06.174703       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:06.689797    4316 command_runner.go:130] ! I0513 23:56:06.174918       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:18:06.689868    4316 command_runner.go:130] ! W0513 23:56:06.182722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0514 00:18:06.689932    4316 command_runner.go:130] ! E0513 23:56:06.186053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0514 00:18:06.689990    4316 command_runner.go:130] ! W0513 23:56:06.183583       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.690062    4316 command_runner.go:130] ! W0513 23:56:06.183698       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0514 00:18:06.690062    4316 command_runner.go:130] ! W0513 23:56:06.183781       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0514 00:18:06.690180    4316 command_runner.go:130] ! W0513 23:56:06.183835       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0514 00:18:06.690239    4316 command_runner.go:130] ! W0513 23:56:06.183868       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0514 00:18:06.690239    4316 command_runner.go:130] ! W0513 23:56:06.184039       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:06.690339    4316 command_runner.go:130] ! W0513 23:56:06.186929       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.690396    4316 command_runner.go:130] ! W0513 23:56:06.186969       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.690517    4316 command_runner.go:130] ! W0513 23:56:06.187026       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0514 00:18:06.690588    4316 command_runner.go:130] ! E0513 23:56:06.188647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0514 00:18:06.690641    4316 command_runner.go:130] ! E0513 23:56:06.188112       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.690703    4316 command_runner.go:130] ! E0513 23:56:06.188121       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0514 00:18:06.690762    4316 command_runner.go:130] ! E0513 23:56:06.188233       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0514 00:18:06.690835    4316 command_runner.go:130] ! E0513 23:56:06.188242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0514 00:18:06.690925    4316 command_runner.go:130] ! E0513 23:56:06.189252       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0514 00:18:06.690969    4316 command_runner.go:130] ! E0513 23:56:06.189533       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:06.691063    4316 command_runner.go:130] ! E0513 23:56:06.189643       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.691104    4316 command_runner.go:130] ! E0513 23:56:06.189773       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.691191    4316 command_runner.go:130] ! W0513 23:56:06.190106       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0514 00:18:06.691256    4316 command_runner.go:130] ! E0513 23:56:06.190324       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0514 00:18:06.691320    4316 command_runner.go:130] ! W0513 23:56:06.190538       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0514 00:18:06.691464    4316 command_runner.go:130] ! E0513 23:56:06.191036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0514 00:18:06.691464    4316 command_runner.go:130] ! W0513 23:56:06.191581       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0514 00:18:06.691517    4316 command_runner.go:130] ! E0513 23:56:06.192160       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0514 00:18:06.691555    4316 command_runner.go:130] ! W0513 23:56:06.191626       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.691603    4316 command_runner.go:130] ! E0513 23:56:06.192721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.691603    4316 command_runner.go:130] ! W0513 23:56:06.190821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0514 00:18:06.691643    4316 command_runner.go:130] ! E0513 23:56:06.193134       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0514 00:18:06.691643    4316 command_runner.go:130] ! W0513 23:56:07.154218       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0514 00:18:06.691703    4316 command_runner.go:130] ! E0513 23:56:07.155376       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0514 00:18:06.691703    4316 command_runner.go:130] ! W0513 23:56:07.229548       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0514 00:18:06.691760    4316 command_runner.go:130] ! E0513 23:56:07.229613       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0514 00:18:06.691760    4316 command_runner.go:130] ! W0513 23:56:07.344429       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.691760    4316 command_runner.go:130] ! E0513 23:56:07.344853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.691824    4316 command_runner.go:130] ! W0513 23:56:07.410556       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0514 00:18:06.691883    4316 command_runner.go:130] ! E0513 23:56:07.410716       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0514 00:18:06.691883    4316 command_runner.go:130] ! W0513 23:56:07.423084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0514 00:18:06.691960    4316 command_runner.go:130] ! E0513 23:56:07.423126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0514 00:18:06.691960    4316 command_runner.go:130] ! W0513 23:56:07.467897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0514 00:18:06.691998    4316 command_runner.go:130] ! E0513 23:56:07.467939       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! W0513 23:56:07.484903       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! E0513 23:56:07.485019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! W0513 23:56:07.545758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! E0513 23:56:07.546087       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! W0513 23:56:07.573884       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! E0513 23:56:07.573980       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! W0513 23:56:07.633780       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! E0513 23:56:07.633901       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! W0513 23:56:07.680821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! E0513 23:56:07.680938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! W0513 23:56:07.704130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! E0513 23:56:07.704357       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! W0513 23:56:07.736914       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:06.692028    4316 command_runner.go:130] ! E0513 23:56:07.737079       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:06.692028    4316 command_runner.go:130] ! W0513 23:56:07.754367       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! E0513 23:56:07.754798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0514 00:18:06.692560    4316 command_runner.go:130] ! I0513 23:56:09.676327       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:18:06.692560    4316 command_runner.go:130] ! E0514 00:14:35.689344       1 run.go:74] "command failed" err="finished without leader elect"
	I0514 00:18:06.700984    4316 logs.go:123] Gathering logs for kube-controller-manager [e96f94398d6d] ...
	I0514 00:18:06.700984    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e96f94398d6d"
	I0514 00:18:06.722879    4316 command_runner.go:130] ! I0513 23:56:04.448604       1 serving.go:380] Generated self-signed cert in-memory
	I0514 00:18:06.722879    4316 command_runner.go:130] ! I0513 23:56:04.932336       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0514 00:18:06.722879    4316 command_runner.go:130] ! I0513 23:56:04.932378       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:06.723442    4316 command_runner.go:130] ! I0513 23:56:04.934044       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0514 00:18:06.723511    4316 command_runner.go:130] ! I0513 23:56:04.934133       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:06.723511    4316 command_runner.go:130] ! I0513 23:56:04.934796       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0514 00:18:06.723511    4316 command_runner.go:130] ! I0513 23:56:04.935005       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:06.723511    4316 command_runner.go:130] ! I0513 23:56:09.124957       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0514 00:18:06.723511    4316 command_runner.go:130] ! I0513 23:56:09.125092       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0514 00:18:06.723511    4316 command_runner.go:130] ! I0513 23:56:09.140996       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0514 00:18:06.723511    4316 command_runner.go:130] ! I0513 23:56:09.141447       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0514 00:18:06.723511    4316 command_runner.go:130] ! I0513 23:56:09.141567       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0514 00:18:06.723511    4316 command_runner.go:130] ! I0513 23:56:09.156847       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0514 00:18:06.723633    4316 command_runner.go:130] ! I0513 23:56:09.157241       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0514 00:18:06.723633    4316 command_runner.go:130] ! I0513 23:56:09.157455       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0514 00:18:06.723633    4316 command_runner.go:130] ! I0513 23:56:09.170795       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0514 00:18:06.723633    4316 command_runner.go:130] ! I0513 23:56:09.171005       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0514 00:18:06.723719    4316 command_runner.go:130] ! I0513 23:56:09.171684       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0514 00:18:06.726821    4316 command_runner.go:130] ! I0513 23:56:09.171921       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0514 00:18:06.726883    4316 command_runner.go:130] ! I0513 23:56:09.172144       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0514 00:18:06.726883    4316 command_runner.go:130] ! I0513 23:56:09.183975       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0514 00:18:06.726883    4316 command_runner.go:130] ! I0513 23:56:09.184362       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0514 00:18:06.726883    4316 command_runner.go:130] ! I0513 23:56:09.185233       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0514 00:18:06.726883    4316 command_runner.go:130] ! I0513 23:56:09.230173       1 shared_informer.go:320] Caches are synced for tokens
	I0514 00:18:06.726940    4316 command_runner.go:130] ! I0513 23:56:09.242679       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0514 00:18:06.726940    4316 command_runner.go:130] ! I0513 23:56:09.242735       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0514 00:18:06.726940    4316 command_runner.go:130] ! I0513 23:56:09.242821       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0514 00:18:06.726940    4316 command_runner.go:130] ! I0513 23:56:09.249513       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0514 00:18:06.727001    4316 command_runner.go:130] ! I0513 23:56:09.249614       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0514 00:18:06.727001    4316 command_runner.go:130] ! I0513 23:56:09.249731       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0514 00:18:06.727001    4316 command_runner.go:130] ! I0513 23:56:09.249824       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0514 00:18:06.727066    4316 command_runner.go:130] ! I0513 23:56:09.249912       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0514 00:18:06.727121    4316 command_runner.go:130] ! I0513 23:56:09.250132       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0514 00:18:06.727121    4316 command_runner.go:130] ! I0513 23:56:09.250216       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0514 00:18:06.727121    4316 command_runner.go:130] ! I0513 23:56:09.250270       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0514 00:18:06.727177    4316 command_runner.go:130] ! I0513 23:56:09.250425       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0514 00:18:06.727177    4316 command_runner.go:130] ! I0513 23:56:09.250604       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0514 00:18:06.727177    4316 command_runner.go:130] ! I0513 23:56:09.250656       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0514 00:18:06.727273    4316 command_runner.go:130] ! I0513 23:56:09.250695       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0514 00:18:06.727273    4316 command_runner.go:130] ! I0513 23:56:09.250745       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0514 00:18:06.727273    4316 command_runner.go:130] ! I0513 23:56:09.250794       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0514 00:18:06.727273    4316 command_runner.go:130] ! I0513 23:56:09.250851       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0514 00:18:06.727340    4316 command_runner.go:130] ! I0513 23:56:09.250883       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0514 00:18:06.727340    4316 command_runner.go:130] ! I0513 23:56:09.250994       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0514 00:18:06.727340    4316 command_runner.go:130] ! I0513 23:56:09.251028       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0514 00:18:06.727340    4316 command_runner.go:130] ! I0513 23:56:09.251909       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0514 00:18:06.727340    4316 command_runner.go:130] ! I0513 23:56:09.251999       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0514 00:18:06.727402    4316 command_runner.go:130] ! I0513 23:56:09.252142       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0514 00:18:06.727402    4316 command_runner.go:130] ! I0513 23:56:09.305089       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0514 00:18:06.727402    4316 command_runner.go:130] ! I0513 23:56:09.305302       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0514 00:18:06.727402    4316 command_runner.go:130] ! I0513 23:56:09.305357       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0514 00:18:06.727467    4316 command_runner.go:130] ! I0513 23:56:09.305376       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0514 00:18:06.727467    4316 command_runner.go:130] ! I0513 23:56:09.321907       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0514 00:18:06.727467    4316 command_runner.go:130] ! I0513 23:56:09.322244       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0514 00:18:06.727467    4316 command_runner.go:130] ! I0513 23:56:09.322270       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0514 00:18:06.727467    4316 command_runner.go:130] ! I0513 23:56:09.324160       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0514 00:18:06.727528    4316 command_runner.go:130] ! I0513 23:56:09.324208       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0514 00:18:06.727528    4316 command_runner.go:130] ! E0513 23:56:09.334850       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0514 00:18:06.727528    4316 command_runner.go:130] ! I0513 23:56:09.335135       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0514 00:18:06.727593    4316 command_runner.go:130] ! I0513 23:56:09.346530       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0514 00:18:06.727593    4316 command_runner.go:130] ! I0513 23:56:09.346809       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0514 00:18:06.727593    4316 command_runner.go:130] ! I0513 23:56:09.346883       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0514 00:18:06.730663    4316 command_runner.go:130] ! I0513 23:56:09.385297       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0514 00:18:06.730663    4316 command_runner.go:130] ! I0513 23:56:09.385391       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0514 00:18:06.730663    4316 command_runner.go:130] ! I0513 23:56:09.385403       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0514 00:18:06.730663    4316 command_runner.go:130] ! I0513 23:56:09.542113       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0514 00:18:06.730663    4316 command_runner.go:130] ! I0513 23:56:09.542271       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0514 00:18:06.730663    4316 command_runner.go:130] ! I0513 23:56:09.542284       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0514 00:18:06.730663    4316 command_runner.go:130] ! I0513 23:56:09.581300       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0514 00:18:06.730663    4316 command_runner.go:130] ! I0513 23:56:09.581321       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0514 00:18:06.730663    4316 command_runner.go:130] ! I0513 23:56:09.581454       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:06.730663    4316 command_runner.go:130] ! I0513 23:56:09.581971       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0514 00:18:06.730663    4316 command_runner.go:130] ! I0513 23:56:09.582008       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0514 00:18:06.731204    4316 command_runner.go:130] ! I0513 23:56:09.582030       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:06.731204    4316 command_runner.go:130] ! I0513 23:56:09.582896       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0514 00:18:06.731204    4316 command_runner.go:130] ! I0513 23:56:09.582908       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0514 00:18:06.731277    4316 command_runner.go:130] ! I0513 23:56:09.582922       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:06.731277    4316 command_runner.go:130] ! I0513 23:56:09.583436       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0514 00:18:06.731277    4316 command_runner.go:130] ! I0513 23:56:09.583678       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0514 00:18:06.731277    4316 command_runner.go:130] ! I0513 23:56:09.583691       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0514 00:18:06.731339    4316 command_runner.go:130] ! I0513 23:56:09.583727       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:06.731339    4316 command_runner.go:130] ! I0513 23:56:09.734073       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:09.734159       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:09.734446       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:09.885354       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:09.885756       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:09.885934       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:10.040288       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:10.040486       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:20.090311       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:20.090418       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:20.090428       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:20.090911       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:20.091093       1 shared_informer.go:313] Waiting for caches to sync for node
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:20.101598       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:20.101778       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:20.101805       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.114509       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.114580       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.114849       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.114678       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.115038       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.115048       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0514 00:18:06.733652    4316 command_runner.go:130] ! E0513 23:56:20.117646       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.117865       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.130498       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.130711       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.130932       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.143035       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.143414       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.143607       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.160023       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.160191       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.160215       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.170613       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.170951       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.171064       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.179840       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.180447       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.180590       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.190977       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.191286       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.191448       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.204888       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.205578       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.205670       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.239034       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.239193       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.239262       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.482568       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.486046       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.486073       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.486093       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.786163       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.786358       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.082938       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.083657       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.083743       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.238006       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.238099       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.238152       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.238163       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.283674       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.283751       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.283986       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.284217       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.442664       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.442840       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.442854       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.587997       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.588249       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.588322       1 shared_informer.go:313] Waiting for caches to sync for job
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.740205       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.740392       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.740547       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.889738       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.890053       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.890145       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.038114       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.038197       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.038216       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.038314       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.038329       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.291303       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.291332       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.291999       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.299124       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.317101       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.321553       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100\" does not exist"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.322540       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.335837       1 shared_informer.go:320] Caches are synced for cronjob
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.339493       1 shared_informer.go:320] Caches are synced for PV protection
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.339494       1 shared_informer.go:320] Caches are synced for GC
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.339605       1 shared_informer.go:320] Caches are synced for crt configmap
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.340940       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.341044       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.342309       1 shared_informer.go:320] Caches are synced for service account
	I0514 00:18:06.734937    4316 command_runner.go:130] ! I0513 23:56:22.343675       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0514 00:18:06.734937    4316 command_runner.go:130] ! I0513 23:56:22.343828       1 shared_informer.go:320] Caches are synced for PVC protection
	I0514 00:18:06.734937    4316 command_runner.go:130] ! I0513 23:56:22.347539       1 shared_informer.go:320] Caches are synced for expand
	I0514 00:18:06.734991    4316 command_runner.go:130] ! I0513 23:56:22.357773       1 shared_informer.go:320] Caches are synced for deployment
	I0514 00:18:06.734991    4316 command_runner.go:130] ! I0513 23:56:22.361377       1 shared_informer.go:320] Caches are synced for ephemeral
	I0514 00:18:06.734991    4316 command_runner.go:130] ! I0513 23:56:22.372019       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0514 00:18:06.735029    4316 command_runner.go:130] ! I0513 23:56:22.380620       1 shared_informer.go:320] Caches are synced for stateful set
	I0514 00:18:06.735029    4316 command_runner.go:130] ! I0513 23:56:22.382092       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0514 00:18:06.735066    4316 command_runner.go:130] ! I0513 23:56:22.382250       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0514 00:18:06.735066    4316 command_runner.go:130] ! I0513 23:56:22.382979       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0514 00:18:06.735109    4316 command_runner.go:130] ! I0513 23:56:22.384565       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0514 00:18:06.735109    4316 command_runner.go:130] ! I0513 23:56:22.384604       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0514 00:18:06.735109    4316 command_runner.go:130] ! I0513 23:56:22.384724       1 shared_informer.go:320] Caches are synced for HPA
	I0514 00:18:06.735109    4316 command_runner.go:130] ! I0513 23:56:22.386009       1 shared_informer.go:320] Caches are synced for TTL
	I0514 00:18:06.735109    4316 command_runner.go:130] ! I0513 23:56:22.386117       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0514 00:18:06.735109    4316 command_runner.go:130] ! I0513 23:56:22.386299       1 shared_informer.go:320] Caches are synced for attach detach
	I0514 00:18:06.735109    4316 command_runner.go:130] ! I0513 23:56:22.389103       1 shared_informer.go:320] Caches are synced for job
	I0514 00:18:06.735109    4316 command_runner.go:130] ! I0513 23:56:22.390596       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0514 00:18:06.735228    4316 command_runner.go:130] ! I0513 23:56:22.391278       1 shared_informer.go:320] Caches are synced for node
	I0514 00:18:06.735228    4316 command_runner.go:130] ! I0513 23:56:22.391538       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0514 00:18:06.735228    4316 command_runner.go:130] ! I0513 23:56:22.391663       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0514 00:18:06.735228    4316 command_runner.go:130] ! I0513 23:56:22.392031       1 shared_informer.go:320] Caches are synced for namespace
	I0514 00:18:06.735228    4316 command_runner.go:130] ! I0513 23:56:22.392207       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0514 00:18:06.735285    4316 command_runner.go:130] ! I0513 23:56:22.392242       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0514 00:18:06.735285    4316 command_runner.go:130] ! I0513 23:56:22.392249       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0514 00:18:06.735285    4316 command_runner.go:130] ! I0513 23:56:22.402105       1 shared_informer.go:320] Caches are synced for daemon sets
	I0514 00:18:06.735285    4316 command_runner.go:130] ! I0513 23:56:22.405500       1 shared_informer.go:320] Caches are synced for disruption
	I0514 00:18:06.735338    4316 command_runner.go:130] ! I0513 23:56:22.406927       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0514 00:18:06.735338    4316 command_runner.go:130] ! I0513 23:56:22.411111       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100" podCIDRs=["10.244.0.0/24"]
	I0514 00:18:06.735338    4316 command_runner.go:130] ! I0513 23:56:22.431075       1 shared_informer.go:320] Caches are synced for persistent volume
	I0514 00:18:06.735338    4316 command_runner.go:130] ! I0513 23:56:22.443663       1 shared_informer.go:320] Caches are synced for endpoint
	I0514 00:18:06.735398    4316 command_runner.go:130] ! I0513 23:56:22.552382       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 00:18:06.735398    4316 command_runner.go:130] ! I0513 23:56:22.573274       1 shared_informer.go:320] Caches are synced for taint
	I0514 00:18:06.735434    4316 command_runner.go:130] ! I0513 23:56:22.573442       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0514 00:18:06.735434    4316 command_runner.go:130] ! I0513 23:56:22.573935       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100"
	I0514 00:18:06.735469    4316 command_runner.go:130] ! I0513 23:56:22.574179       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0514 00:18:06.735524    4316 command_runner.go:130] ! I0513 23:56:22.586849       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0514 00:18:06.735524    4316 command_runner.go:130] ! I0513 23:56:22.602574       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 00:18:06.735524    4316 command_runner.go:130] ! I0513 23:56:23.018846       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 00:18:06.735524    4316 command_runner.go:130] ! I0513 23:56:23.087540       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 00:18:06.735572    4316 command_runner.go:130] ! I0513 23:56:23.087631       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0514 00:18:06.735572    4316 command_runner.go:130] ! I0513 23:56:23.691681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="593.37356ms"
	I0514 00:18:06.735572    4316 command_runner.go:130] ! I0513 23:56:23.736584       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.765409ms"
	I0514 00:18:06.735630    4316 command_runner.go:130] ! I0513 23:56:23.736691       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.105µs"
	I0514 00:18:06.735630    4316 command_runner.go:130] ! I0513 23:56:23.741069       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="82.307µs"
	I0514 00:18:06.735682    4316 command_runner.go:130] ! I0513 23:56:24.558346       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.410112ms"
	I0514 00:18:06.735682    4316 command_runner.go:130] ! I0513 23:56:24.599621       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.388659ms"
	I0514 00:18:06.735682    4316 command_runner.go:130] ! I0513 23:56:24.599778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.705µs"
	I0514 00:18:06.735742    4316 command_runner.go:130] ! I0513 23:56:35.460855       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.604µs"
	I0514 00:18:06.735742    4316 command_runner.go:130] ! I0513 23:56:35.495875       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="63.404µs"
	I0514 00:18:06.735793    4316 command_runner.go:130] ! I0513 23:56:36.868700       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="85.505µs"
	I0514 00:18:06.735793    4316 command_runner.go:130] ! I0513 23:56:36.916603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.935352ms"
	I0514 00:18:06.735793    4316 command_runner.go:130] ! I0513 23:56:36.917123       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.803µs"
	I0514 00:18:06.735846    4316 command_runner.go:130] ! I0513 23:56:37.577172       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0514 00:18:06.735846    4316 command_runner.go:130] ! I0513 23:59:02.230067       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m02\" does not exist"
	I0514 00:18:06.735896    4316 command_runner.go:130] ! I0513 23:59:02.246355       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m02" podCIDRs=["10.244.1.0/24"]
	I0514 00:18:06.735896    4316 command_runner.go:130] ! I0513 23:59:02.603699       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m02"
	I0514 00:18:06.735896    4316 command_runner.go:130] ! I0513 23:59:22.527169       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:06.735953    4316 command_runner.go:130] ! I0513 23:59:45.791856       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.887671ms"
	I0514 00:18:06.735953    4316 command_runner.go:130] ! I0513 23:59:45.808219       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.096894ms"
	I0514 00:18:06.736003    4316 command_runner.go:130] ! I0513 23:59:45.808747       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.005µs"
	I0514 00:18:06.736003    4316 command_runner.go:130] ! I0513 23:59:45.809833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.705µs"
	I0514 00:18:06.736003    4316 command_runner.go:130] ! I0513 23:59:45.811263       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.604µs"
	I0514 00:18:06.736059    4316 command_runner.go:130] ! I0513 23:59:48.526617       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.926472ms"
	I0514 00:18:06.736059    4316 command_runner.go:130] ! I0513 23:59:48.529326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.302µs"
	I0514 00:18:06.736059    4316 command_runner.go:130] ! I0513 23:59:48.555529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.972453ms"
	I0514 00:18:06.736111    4316 command_runner.go:130] ! I0513 23:59:48.556317       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.601µs"
	I0514 00:18:06.736111    4316 command_runner.go:130] ! I0514 00:03:17.563212       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:06.736165    4316 command_runner.go:130] ! I0514 00:03:17.565297       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m03\" does not exist"
	I0514 00:18:06.736165    4316 command_runner.go:130] ! I0514 00:03:17.579900       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m03" podCIDRs=["10.244.2.0/24"]
	I0514 00:18:06.736216    4316 command_runner.go:130] ! I0514 00:03:17.665892       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m03"
	I0514 00:18:06.736216    4316 command_runner.go:130] ! I0514 00:03:38.035898       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:06.736322    4316 command_runner.go:130] ! I0514 00:10:17.797191       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:06.736378    4316 command_runner.go:130] ! I0514 00:12:39.070271       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:06.736378    4316 command_runner.go:130] ! I0514 00:12:44.527915       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:06.736426    4316 command_runner.go:130] ! I0514 00:12:44.528275       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m03\" does not exist"
	I0514 00:18:06.736426    4316 command_runner.go:130] ! I0514 00:12:44.543895       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m03" podCIDRs=["10.244.3.0/24"]
	I0514 00:18:06.736426    4316 command_runner.go:130] ! I0514 00:12:49.983419       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:06.736481    4316 command_runner.go:130] ! I0514 00:14:17.920991       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:06.736481    4316 command_runner.go:130] ! I0514 00:14:33.013074       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.740609ms"
	I0514 00:18:06.736481    4316 command_runner.go:130] ! I0514 00:14:33.013918       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.506µs"
	I0514 00:18:06.752395    4316 logs.go:123] Gathering logs for coredns [76c5ab7859ef] ...
	I0514 00:18:06.752395    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5ab7859ef"
	I0514 00:18:06.775995    4316 command_runner.go:130] > .:53
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = aa3c53a4fee7c79042020c4ad5abc53f615c90ace85c56ddcef4febd643c83c914a53a500e1bfe4eab6dd4f6a22b9d2014a8ba875b505ed10d3063ed95ac2ed3
	I0514 00:18:06.776994    4316 command_runner.go:130] > CoreDNS-1.11.1
	I0514 00:18:06.776994    4316 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 127.0.0.1:57161 - 45698 "HINFO IN 8990392176501838712.5889638972791529478. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051692136s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:55099 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000211505s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:55878 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.185519855s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:33619 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.15684109s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:49440 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.197645067s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:50960 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000430608s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:46839 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000167103s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:55330 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000155803s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:50874 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000131802s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:53724 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096802s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:59752 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.042707366s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:54429 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000269706s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:48558 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000262605s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:46986 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023487677s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:60460 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174903s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:60672 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000204304s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:36311 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110402s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:43910 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000301006s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:52495 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000145803s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:46357 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066702s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:41390 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062301s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:35739 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000084301s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:44800 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000163303s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:57631 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068702s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:50842 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135702s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:41210 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204604s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:57858 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073801s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:48782 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000152303s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:36081 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121002s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:46909 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115002s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:36030 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000220205s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:56187 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059401s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:51500 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099802s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:57247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147903s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:46132 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000170203s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:57206 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000452309s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:44795 - 5 "PTR IN 1.96.23.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000146203s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:33385 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000082102s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:56742 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000173704s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:46927 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185904s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:42956 - 5 "PTR IN 1.96.23.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000054801s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0514 00:18:06.780954    4316 logs.go:123] Gathering logs for kube-scheduler [d3581c1c570c] ...
	I0514 00:18:06.781477    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3581c1c570c"
	I0514 00:18:06.802398    4316 command_runner.go:130] ! I0514 00:16:52.716401       1 serving.go:380] Generated self-signed cert in-memory
	I0514 00:18:06.802398    4316 command_runner.go:130] ! W0514 00:16:54.858727       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0514 00:18:06.803479    4316 command_runner.go:130] ! W0514 00:16:54.858778       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:06.803611    4316 command_runner.go:130] ! W0514 00:16:54.858790       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0514 00:18:06.803611    4316 command_runner.go:130] ! W0514 00:16:54.858800       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0514 00:18:06.803679    4316 command_runner.go:130] ! I0514 00:16:54.945438       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0514 00:18:06.803741    4316 command_runner.go:130] ! I0514 00:16:54.945867       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:06.803787    4316 command_runner.go:130] ! I0514 00:16:54.953986       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0514 00:18:06.803787    4316 command_runner.go:130] ! I0514 00:16:54.957180       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0514 00:18:06.803787    4316 command_runner.go:130] ! I0514 00:16:54.957284       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:18:06.803867    4316 command_runner.go:130] ! I0514 00:16:54.957493       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:06.803895    4316 command_runner.go:130] ! I0514 00:16:55.058381       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:18:06.807563    4316 logs.go:123] Gathering logs for kube-proxy [b2a1b31cd7de] ...
	I0514 00:18:06.807626    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a1b31cd7de"
	I0514 00:18:06.831122    4316 command_runner.go:130] ! I0514 00:16:57.528613       1 server_linux.go:69] "Using iptables proxy"
	I0514 00:18:06.831122    4316 command_runner.go:130] ! I0514 00:16:57.562847       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.102.122"]
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.701301       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.701447       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.701476       1 server_linux.go:165] "Using iptables Proxier"
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.708219       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.708800       1 server.go:872] "Version info" version="v1.30.0"
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.708841       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.712422       1 config.go:192] "Starting service config controller"
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.712733       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.712780       1 config.go:101] "Starting endpoint slice config controller"
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.712824       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.715339       1 config.go:319] "Starting node config controller"
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.717651       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.815732       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.815811       1 shared_informer.go:320] Caches are synced for service config
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.818050       1 shared_informer.go:320] Caches are synced for node config
	I0514 00:18:06.832666    4316 logs.go:123] Gathering logs for kindnet [b7d8d9a5e5ea] ...
	I0514 00:18:06.832699    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d8d9a5e5ea"
	I0514 00:18:06.854234    4316 command_runner.go:130] ! I0514 00:16:57.751233       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0514 00:18:06.854234    4316 command_runner.go:130] ! I0514 00:16:57.751585       1 main.go:107] hostIP = 172.23.102.122
	I0514 00:18:06.854234    4316 command_runner.go:130] ! podIP = 172.23.102.122
	I0514 00:18:06.854234    4316 command_runner.go:130] ! I0514 00:16:57.752181       1 main.go:116] setting mtu 1500 for CNI 
	I0514 00:18:06.854234    4316 command_runner.go:130] ! I0514 00:16:57.752200       1 main.go:146] kindnetd IP family: "ipv4"
	I0514 00:18:06.854234    4316 command_runner.go:130] ! I0514 00:16:57.752221       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0514 00:18:06.854234    4316 command_runner.go:130] ! I0514 00:17:01.123977       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:06.854234    4316 command_runner.go:130] ! I0514 00:17:04.195495       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:06.854234    4316 command_runner.go:130] ! I0514 00:17:07.267636       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:06.854234    4316 command_runner.go:130] ! I0514 00:17:10.339619       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:06.855220    4316 command_runner.go:130] ! I0514 00:17:13.411801       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:06.855220    4316 command_runner.go:130] ! panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:06.855220    4316 command_runner.go:130] ! goroutine 1 [running]:
	I0514 00:18:06.855220    4316 command_runner.go:130] ! main.main()
	I0514 00:18:06.855220    4316 command_runner.go:130] ! 	/go/src/cmd/kindnetd/main.go:195 +0xd3d
	I0514 00:18:06.861781    4316 logs.go:123] Gathering logs for dmesg ...
	I0514 00:18:06.862563    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0514 00:18:06.883290    4316 command_runner.go:130] > [May14 00:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.104207] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.023601] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.058832] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.024495] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0514 00:18:06.884306    4316 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +5.692465] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.707713] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +1.789899] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +7.282690] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0514 00:18:06.884306    4316 command_runner.go:130] > [May14 00:16] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.158382] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [ +23.750429] systemd-fstab-generator[974]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.111929] kauditd_printk_skb: 73 callbacks suppressed
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.464883] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.164872] systemd-fstab-generator[1027]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.194348] systemd-fstab-generator[1041]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +2.832176] systemd-fstab-generator[1229]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.181315] systemd-fstab-generator[1241]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.160798] systemd-fstab-generator[1253]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.238904] systemd-fstab-generator[1268]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.787359] systemd-fstab-generator[1378]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.085936] kauditd_printk_skb: 205 callbacks suppressed
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +3.384697] systemd-fstab-generator[1513]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +1.802132] kauditd_printk_skb: 64 callbacks suppressed
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +5.213940] kauditd_printk_skb: 10 callbacks suppressed
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +3.471694] systemd-fstab-generator[2315]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [May14 00:17] kauditd_printk_skb: 70 callbacks suppressed
	I0514 00:18:06.886287    4316 logs.go:123] Gathering logs for etcd [08450c853590] ...
	I0514 00:18:06.886287    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08450c853590"
	I0514 00:18:06.911840    4316 command_runner.go:130] ! {"level":"warn","ts":"2024-05-14T00:16:51.687231Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0514 00:18:06.912015    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.691397Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.23.102.122:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.23.102.122:2380","--initial-cluster=multinode-101100=https://172.23.102.122:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.23.102.122:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.23.102.122:2380","--name=multinode-101100","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0514 00:18:06.912090    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.692425Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0514 00:18:06.912158    4316 command_runner.go:130] ! {"level":"warn","ts":"2024-05-14T00:16:51.693634Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0514 00:18:06.912158    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.693771Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.23.102.122:2380"]}
	I0514 00:18:06.912225    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.694117Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0514 00:18:06.912314    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.703219Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.23.102.122:2379"]}
	I0514 00:18:06.912489    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.704312Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-101100","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.23.102.122:2380"],"listen-peer-urls":["https://172.23.102.122:2380"],"advertise-client-urls":["https://172.23.102.122:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.102.122:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0514 00:18:06.912489    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.7264Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"19.905879ms"}
	I0514 00:18:06.912489    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.748539Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0514 00:18:06.912489    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.766395Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"bb849d1df0b559d7","local-member-id":"6e4c15c3d0f3380f","commit-index":1898}
	I0514 00:18:06.912489    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.767439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f switched to configuration voters=()"}
	I0514 00:18:06.912489    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.767611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became follower at term 2"}
	I0514 00:18:06.912489    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.768086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 6e4c15c3d0f3380f [peers: [], term: 2, commit: 1898, applied: 0, lastindex: 1898, lastterm: 2]"}
	I0514 00:18:06.912489    4316 command_runner.go:130] ! {"level":"warn","ts":"2024-05-14T00:16:51.782157Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0514 00:18:06.912489    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.786938Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1096}
	I0514 00:18:06.912489    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.797876Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1653}
	I0514 00:18:06.912489    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.80426Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0514 00:18:06.912489    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.81216Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"6e4c15c3d0f3380f","timeout":"7s"}
	I0514 00:18:06.913013    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.813213Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"6e4c15c3d0f3380f"}
	I0514 00:18:06.913054    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.814234Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"6e4c15c3d0f3380f","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0514 00:18:06.913079    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.815302Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0514 00:18:06.913079    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.816695Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0514 00:18:06.913079    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.816877Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0514 00:18:06.913079    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.816978Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0514 00:18:06.913079    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.817493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f switched to configuration voters=(7947751373170489359)"}
	I0514 00:18:06.913079    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.817687Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bb849d1df0b559d7","local-member-id":"6e4c15c3d0f3380f","added-peer-id":"6e4c15c3d0f3380f","added-peer-peer-urls":["https://172.23.106.39:2380"]}
	I0514 00:18:06.913079    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.817911Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bb849d1df0b559d7","local-member-id":"6e4c15c3d0f3380f","cluster-version":"3.5"}
	I0514 00:18:06.913079    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.818648Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0514 00:18:06.913079    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.83299Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0514 00:18:06.913079    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.834951Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6e4c15c3d0f3380f","initial-advertise-peer-urls":["https://172.23.102.122:2380"],"listen-peer-urls":["https://172.23.102.122:2380"],"advertise-client-urls":["https://172.23.102.122:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.102.122:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0514 00:18:06.913079    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.835138Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0514 00:18:06.913599    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.835469Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.23.102.122:2380"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.835603Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.23.102.122:2380"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.468953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f is starting a new election at term 2"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became pre-candidate at term 2"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f received MsgPreVoteResp from 6e4c15c3d0f3380f at term 2"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became candidate at term 3"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f received MsgVoteResp from 6e4c15c3d0f3380f at term 3"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became leader at term 3"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6e4c15c3d0f3380f elected leader 6e4c15c3d0f3380f at term 3"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.479025Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"6e4c15c3d0f3380f","local-member-attributes":"{Name:multinode-101100 ClientURLs:[https://172.23.102.122:2379]}","request-path":"/0/members/6e4c15c3d0f3380f/attributes","cluster-id":"bb849d1df0b559d7","publish-timeout":"7s"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.479459Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.479642Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.481317Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.481353Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.483334Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.23.102.122:2379"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.483616Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0514 00:18:06.919879    4316 logs.go:123] Gathering logs for coredns [dcc5a109288b] ...
	I0514 00:18:06.919879    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc5a109288b"
	I0514 00:18:06.946346    4316 command_runner.go:130] > .:53
	I0514 00:18:06.946346    4316 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = aa3c53a4fee7c79042020c4ad5abc53f615c90ace85c56ddcef4febd643c83c914a53a500e1bfe4eab6dd4f6a22b9d2014a8ba875b505ed10d3063ed95ac2ed3
	I0514 00:18:06.946346    4316 command_runner.go:130] > CoreDNS-1.11.1
	I0514 00:18:06.946346    4316 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0514 00:18:06.947333    4316 command_runner.go:130] > [INFO] 127.0.0.1:53257 - 27032 "HINFO IN 6976640239659908905.245956973392320689. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.05278328s
	I0514 00:18:06.947333    4316 logs.go:123] Gathering logs for container status ...
	I0514 00:18:06.947333    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0514 00:18:07.001003    4316 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0514 00:18:07.001003    4316 command_runner.go:130] > 3d0b2f0362eb4       8c811b4aec35f                                                                                         7 seconds ago        Running             busybox                   1                   8cb9b6d6d0915       busybox-fc5497c4f-xqj6w
	I0514 00:18:07.001003    4316 command_runner.go:130] > dcc5a109288b6       cbb01a7bd410d                                                                                         7 seconds ago        Running             coredns                   1                   1cccb5e8cee3b       coredns-7db6d8ff4d-4kmx4
	I0514 00:18:07.001141    4316 command_runner.go:130] > bde84ba2d4ed7       6e38f40d628db                                                                                         28 seconds ago       Running             storage-provisioner       2                   468a0e2976ae4       storage-provisioner
	I0514 00:18:07.001191    4316 command_runner.go:130] > 2b424a7cd98c8       4950bb10b3f87                                                                                         40 seconds ago       Running             kindnet-cni               2                   5233e076edceb       kindnet-9q2tv
	I0514 00:18:07.001261    4316 command_runner.go:130] > b7d8d9a5e5eaf       4950bb10b3f87                                                                                         About a minute ago   Exited              kindnet-cni               1                   5233e076edceb       kindnet-9q2tv
	I0514 00:18:07.001261    4316 command_runner.go:130] > b142687b621f1       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   468a0e2976ae4       storage-provisioner
	I0514 00:18:07.001361    4316 command_runner.go:130] > b2a1b31cd7dee       a0bf559e280cf                                                                                         About a minute ago   Running             kube-proxy                1                   a8ac60a565998       kube-proxy-zhcz6
	I0514 00:18:07.001409    4316 command_runner.go:130] > 08450c853590d       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   419648c0d4053       etcd-multinode-101100
	I0514 00:18:07.001472    4316 command_runner.go:130] > da9e6534cd87d       c42f13656d0b2                                                                                         About a minute ago   Running             kube-apiserver            0                   509b8407e0955       kube-apiserver-multinode-101100
	I0514 00:18:07.001472    4316 command_runner.go:130] > d3581c1c570cf       259c8277fcbbc                                                                                         About a minute ago   Running             kube-scheduler            1                   ddcaadef980ac       kube-scheduler-multinode-101100
	I0514 00:18:07.001598    4316 command_runner.go:130] > b87239d1199ab       c7aad43836fa5                                                                                         About a minute ago   Running             kube-controller-manager   1                   659643d47b9ae       kube-controller-manager-multinode-101100
	I0514 00:18:07.001669    4316 command_runner.go:130] > 57dea5416eb67       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago       Exited              busybox                   0                   76d1b8ce19aba       busybox-fc5497c4f-xqj6w
	I0514 00:18:07.001736    4316 command_runner.go:130] > 76c5ab7859eff       cbb01a7bd410d                                                                                         21 minutes ago       Exited              coredns                   0                   8bb49b28c842a       coredns-7db6d8ff4d-4kmx4
	I0514 00:18:07.001736    4316 command_runner.go:130] > 91edaaa00da23       a0bf559e280cf                                                                                         21 minutes ago       Exited              kube-proxy                0                   9bd694480978f       kube-proxy-zhcz6
	I0514 00:18:07.001803    4316 command_runner.go:130] > e96f94398d6dd       c7aad43836fa5                                                                                         22 minutes ago       Exited              kube-controller-manager   0                   da9268fd6556b       kube-controller-manager-multinode-101100
	I0514 00:18:07.001908    4316 command_runner.go:130] > 964887fc5d362       259c8277fcbbc                                                                                         22 minutes ago       Exited              kube-scheduler            0                   fcb3b27edcd2a       kube-scheduler-multinode-101100
	I0514 00:18:07.005539    4316 logs.go:123] Gathering logs for kube-apiserver [da9e6534cd87] ...
	I0514 00:18:07.005539    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e6534cd87"
	I0514 00:18:07.029007    4316 command_runner.go:130] ! I0514 00:16:52.020111       1 options.go:221] external host was not specified, using 172.23.102.122
	I0514 00:18:07.037607    4316 command_runner.go:130] ! I0514 00:16:52.031119       1 server.go:148] Version: v1.30.0
	I0514 00:18:07.037607    4316 command_runner.go:130] ! I0514 00:16:52.031201       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:07.037607    4316 command_runner.go:130] ! I0514 00:16:52.560170       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0514 00:18:07.037734    4316 command_runner.go:130] ! I0514 00:16:52.562027       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0514 00:18:07.037734    4316 command_runner.go:130] ! I0514 00:16:52.567323       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0514 00:18:07.037986    4316 command_runner.go:130] ! I0514 00:16:52.562214       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0514 00:18:07.038048    4316 command_runner.go:130] ! I0514 00:16:52.570134       1 instance.go:299] Using reconciler: lease
	I0514 00:18:07.038048    4316 command_runner.go:130] ! I0514 00:16:53.544464       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0514 00:18:07.038048    4316 command_runner.go:130] ! W0514 00:16:53.544866       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.038114    4316 command_runner.go:130] ! I0514 00:16:53.780904       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0514 00:18:07.038114    4316 command_runner.go:130] ! I0514 00:16:53.781233       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0514 00:18:07.038114    4316 command_runner.go:130] ! I0514 00:16:54.015006       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0514 00:18:07.038114    4316 command_runner.go:130] ! I0514 00:16:54.172205       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0514 00:18:07.038185    4316 command_runner.go:130] ! I0514 00:16:54.186014       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0514 00:18:07.038185    4316 command_runner.go:130] ! W0514 00:16:54.186188       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.038185    4316 command_runner.go:130] ! W0514 00:16:54.186609       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:07.038252    4316 command_runner.go:130] ! I0514 00:16:54.187573       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0514 00:18:07.038252    4316 command_runner.go:130] ! W0514 00:16:54.187695       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.038252    4316 command_runner.go:130] ! I0514 00:16:54.188811       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0514 00:18:07.038322    4316 command_runner.go:130] ! I0514 00:16:54.190200       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0514 00:18:07.038322    4316 command_runner.go:130] ! W0514 00:16:54.190309       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0514 00:18:07.038322    4316 command_runner.go:130] ! W0514 00:16:54.190366       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0514 00:18:07.038322    4316 command_runner.go:130] ! I0514 00:16:54.192283       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0514 00:18:07.038389    4316 command_runner.go:130] ! W0514 00:16:54.192583       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0514 00:18:07.038389    4316 command_runner.go:130] ! I0514 00:16:54.193726       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0514 00:18:07.038389    4316 command_runner.go:130] ! W0514 00:16:54.193833       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.038450    4316 command_runner.go:130] ! W0514 00:16:54.193842       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:07.038450    4316 command_runner.go:130] ! I0514 00:16:54.194656       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0514 00:18:07.038450    4316 command_runner.go:130] ! W0514 00:16:54.194769       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.038516    4316 command_runner.go:130] ! W0514 00:16:54.194831       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.038516    4316 command_runner.go:130] ! I0514 00:16:54.195773       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0514 00:18:07.038516    4316 command_runner.go:130] ! I0514 00:16:54.200522       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0514 00:18:07.038585    4316 command_runner.go:130] ! W0514 00:16:54.200808       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.038585    4316 command_runner.go:130] ! W0514 00:16:54.201073       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:07.038585    4316 command_runner.go:130] ! I0514 00:16:54.202173       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0514 00:18:07.038649    4316 command_runner.go:130] ! W0514 00:16:54.202352       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.038649    4316 command_runner.go:130] ! W0514 00:16:54.202465       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:07.038649    4316 command_runner.go:130] ! I0514 00:16:54.204036       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0514 00:18:07.038719    4316 command_runner.go:130] ! W0514 00:16:54.204232       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0514 00:18:07.038719    4316 command_runner.go:130] ! I0514 00:16:54.213708       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0514 00:18:07.038719    4316 command_runner.go:130] ! W0514 00:16:54.213869       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.038784    4316 command_runner.go:130] ! W0514 00:16:54.213992       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:07.038784    4316 command_runner.go:130] ! I0514 00:16:54.214976       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0514 00:18:07.038784    4316 command_runner.go:130] ! W0514 00:16:54.215217       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.038784    4316 command_runner.go:130] ! W0514 00:16:54.215317       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:07.038852    4316 command_runner.go:130] ! I0514 00:16:54.226860       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0514 00:18:07.038852    4316 command_runner.go:130] ! W0514 00:16:54.227134       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.038852    4316 command_runner.go:130] ! W0514 00:16:54.227258       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:07.038917    4316 command_runner.go:130] ! I0514 00:16:54.230259       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0514 00:18:07.038917    4316 command_runner.go:130] ! I0514 00:16:54.232567       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0514 00:18:07.038917    4316 command_runner.go:130] ! W0514 00:16:54.232734       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0514 00:18:07.038917    4316 command_runner.go:130] ! W0514 00:16:54.232824       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.038986    4316 command_runner.go:130] ! I0514 00:16:54.239186       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0514 00:18:07.038986    4316 command_runner.go:130] ! W0514 00:16:54.239294       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0514 00:18:07.038986    4316 command_runner.go:130] ! W0514 00:16:54.239304       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0514 00:18:07.038986    4316 command_runner.go:130] ! I0514 00:16:54.241605       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0514 00:18:07.039071    4316 command_runner.go:130] ! W0514 00:16:54.241703       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.039071    4316 command_runner.go:130] ! W0514 00:16:54.241712       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:07.039071    4316 command_runner.go:130] ! I0514 00:16:54.242373       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0514 00:18:07.039126    4316 command_runner.go:130] ! W0514 00:16:54.242466       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.039126    4316 command_runner.go:130] ! I0514 00:16:54.259244       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0514 00:18:07.039186    4316 command_runner.go:130] ! W0514 00:16:54.259536       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.039186    4316 command_runner.go:130] ! I0514 00:16:54.792225       1 secure_serving.go:213] Serving securely on [::]:8443
	I0514 00:18:07.039186    4316 command_runner.go:130] ! I0514 00:16:54.792432       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0514 00:18:07.039250    4316 command_runner.go:130] ! I0514 00:16:54.794552       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0514 00:18:07.039311    4316 command_runner.go:130] ! I0514 00:16:54.794677       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0514 00:18:07.039311    4316 command_runner.go:130] ! I0514 00:16:54.794720       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0514 00:18:07.039374    4316 command_runner.go:130] ! I0514 00:16:54.795157       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0514 00:18:07.039374    4316 command_runner.go:130] ! I0514 00:16:54.795787       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0514 00:18:07.039374    4316 command_runner.go:130] ! I0514 00:16:54.795995       1 controller.go:116] Starting legacy_token_tracking_controller
	I0514 00:18:07.039374    4316 command_runner.go:130] ! I0514 00:16:54.796042       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0514 00:18:07.039445    4316 command_runner.go:130] ! I0514 00:16:54.796156       1 controller.go:78] Starting OpenAPI AggregationController
	I0514 00:18:07.039445    4316 command_runner.go:130] ! I0514 00:16:54.796272       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0514 00:18:07.039445    4316 command_runner.go:130] ! I0514 00:16:54.797969       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0514 00:18:07.039511    4316 command_runner.go:130] ! I0514 00:16:54.798688       1 available_controller.go:423] Starting AvailableConditionController
	I0514 00:18:07.039511    4316 command_runner.go:130] ! I0514 00:16:54.798701       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0514 00:18:07.039511    4316 command_runner.go:130] ! I0514 00:16:54.799424       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0514 00:18:07.039572    4316 command_runner.go:130] ! I0514 00:16:54.799667       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0514 00:18:07.039572    4316 command_runner.go:130] ! I0514 00:16:54.799692       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0514 00:18:07.039572    4316 command_runner.go:130] ! I0514 00:16:54.800971       1 aggregator.go:163] waiting for initial CRD sync...
	I0514 00:18:07.039634    4316 command_runner.go:130] ! I0514 00:16:54.792447       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:07.039634    4316 command_runner.go:130] ! I0514 00:16:54.792459       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0514 00:18:07.039694    4316 command_runner.go:130] ! I0514 00:16:54.792473       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:07.039694    4316 command_runner.go:130] ! I0514 00:16:54.812587       1 controller.go:139] Starting OpenAPI controller
	I0514 00:18:07.039694    4316 command_runner.go:130] ! I0514 00:16:54.812611       1 controller.go:87] Starting OpenAPI V3 controller
	I0514 00:18:07.039694    4316 command_runner.go:130] ! I0514 00:16:54.812626       1 naming_controller.go:291] Starting NamingConditionController
	I0514 00:18:07.039757    4316 command_runner.go:130] ! I0514 00:16:54.812640       1 establishing_controller.go:76] Starting EstablishingController
	I0514 00:18:07.039757    4316 command_runner.go:130] ! I0514 00:16:54.812660       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0514 00:18:07.039757    4316 command_runner.go:130] ! I0514 00:16:54.812674       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0514 00:18:07.039817    4316 command_runner.go:130] ! I0514 00:16:54.812685       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0514 00:18:07.039817    4316 command_runner.go:130] ! I0514 00:16:54.848957       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:07.039817    4316 command_runner.go:130] ! I0514 00:16:54.849152       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0514 00:18:07.039879    4316 command_runner.go:130] ! I0514 00:16:54.850275       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0514 00:18:07.039879    4316 command_runner.go:130] ! I0514 00:16:54.850299       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0514 00:18:07.039879    4316 command_runner.go:130] ! I0514 00:16:54.906495       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0514 00:18:07.039939    4316 command_runner.go:130] ! I0514 00:16:54.938841       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0514 00:18:07.039939    4316 command_runner.go:130] ! I0514 00:16:54.950730       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0514 00:18:07.039939    4316 command_runner.go:130] ! I0514 00:16:54.950897       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0514 00:18:07.039939    4316 command_runner.go:130] ! I0514 00:16:54.951294       1 aggregator.go:165] initial CRD sync complete...
	I0514 00:18:07.040002    4316 command_runner.go:130] ! I0514 00:16:54.951545       1 autoregister_controller.go:141] Starting autoregister controller
	I0514 00:18:07.040002    4316 command_runner.go:130] ! I0514 00:16:54.951793       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0514 00:18:07.040002    4316 command_runner.go:130] ! I0514 00:16:54.951875       1 cache.go:39] Caches are synced for autoregister controller
	I0514 00:18:07.040063    4316 command_runner.go:130] ! I0514 00:16:54.962299       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0514 00:18:07.040063    4316 command_runner.go:130] ! I0514 00:16:54.968027       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0514 00:18:07.040063    4316 command_runner.go:130] ! I0514 00:16:54.968302       1 policy_source.go:224] refreshing policies
	I0514 00:18:07.040127    4316 command_runner.go:130] ! I0514 00:16:54.997391       1 shared_informer.go:320] Caches are synced for configmaps
	I0514 00:18:07.040127    4316 command_runner.go:130] ! I0514 00:16:54.999391       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0514 00:18:07.040127    4316 command_runner.go:130] ! I0514 00:16:54.999732       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0514 00:18:07.040187    4316 command_runner.go:130] ! I0514 00:16:54.999871       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0514 00:18:07.040187    4316 command_runner.go:130] ! I0514 00:16:55.037244       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0514 00:18:07.040187    4316 command_runner.go:130] ! I0514 00:16:55.824524       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0514 00:18:07.040246    4316 command_runner.go:130] ! W0514 00:16:56.521956       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.102.122 172.23.106.39]
	I0514 00:18:07.040371    4316 command_runner.go:130] ! I0514 00:16:56.523614       1 controller.go:615] quota admission added evaluator for: endpoints
	I0514 00:18:07.040371    4316 command_runner.go:130] ! I0514 00:16:56.536716       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0514 00:18:07.040371    4316 command_runner.go:130] ! I0514 00:16:57.861026       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0514 00:18:07.040371    4316 command_runner.go:130] ! I0514 00:16:58.068043       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0514 00:18:07.040371    4316 command_runner.go:130] ! I0514 00:16:58.085925       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0514 00:18:07.040371    4316 command_runner.go:130] ! I0514 00:16:58.189328       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0514 00:18:07.040371    4316 command_runner.go:130] ! I0514 00:16:58.200849       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0514 00:18:07.040371    4316 command_runner.go:130] ! W0514 00:17:16.528300       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.102.122]
	I0514 00:18:07.050652    4316 logs.go:123] Gathering logs for kube-controller-manager [b87239d1199a] ...
	I0514 00:18:07.051189    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87239d1199a"
	I0514 00:18:07.074502    4316 command_runner.go:130] ! I0514 00:16:52.414723       1 serving.go:380] Generated self-signed cert in-memory
	I0514 00:18:07.074502    4316 command_runner.go:130] ! I0514 00:16:52.798318       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0514 00:18:07.075367    4316 command_runner.go:130] ! I0514 00:16:52.798456       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:07.075367    4316 command_runner.go:130] ! I0514 00:16:52.802364       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0514 00:18:07.075367    4316 command_runner.go:130] ! I0514 00:16:52.802939       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0514 00:18:07.075367    4316 command_runner.go:130] ! I0514 00:16:52.803159       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:52.803510       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.867503       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.868219       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.874269       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.878308       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.878330       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.878409       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.878509       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.878517       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.882632       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.882648       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.882656       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.883478       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.883488       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.883496       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.885766       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.888273       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.888463       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.889304       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.890244       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.890408       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.893619       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.903162       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.903183       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.969340       1 shared_informer.go:320] Caches are synced for tokens
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.982656       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.982729       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0514 00:18:07.075989    4316 command_runner.go:130] ! I0514 00:16:56.983268       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0514 00:18:07.075989    4316 command_runner.go:130] ! I0514 00:16:56.983299       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0514 00:18:07.075989    4316 command_runner.go:130] ! I0514 00:16:56.983354       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0514 00:18:07.076111    4316 command_runner.go:130] ! I0514 00:16:56.983426       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0514 00:18:07.076111    4316 command_runner.go:130] ! I0514 00:16:56.983451       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0514 00:18:07.076111    4316 command_runner.go:130] ! W0514 00:16:56.983466       1 shared_informer.go:597] resyncPeriod 15h46m20.096782659s is smaller than resyncCheckPeriod 18h37m10.298700604s and the informer has already started. Changing it to 18h37m10.298700604s
	I0514 00:18:07.076111    4316 command_runner.go:130] ! I0514 00:16:56.983922       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0514 00:18:07.076226    4316 command_runner.go:130] ! I0514 00:16:56.984377       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0514 00:18:07.076226    4316 command_runner.go:130] ! I0514 00:16:56.984435       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0514 00:18:07.076226    4316 command_runner.go:130] ! I0514 00:16:56.984460       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0514 00:18:07.076299    4316 command_runner.go:130] ! I0514 00:16:56.984478       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0514 00:18:07.076299    4316 command_runner.go:130] ! I0514 00:16:56.984528       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0514 00:18:07.076377    4316 command_runner.go:130] ! I0514 00:16:56.984568       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0514 00:18:07.076377    4316 command_runner.go:130] ! I0514 00:16:56.984736       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0514 00:18:07.076377    4316 command_runner.go:130] ! I0514 00:16:56.985288       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0514 00:18:07.076473    4316 command_runner.go:130] ! I0514 00:16:56.995607       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0514 00:18:07.076506    4316 command_runner.go:130] ! I0514 00:16:56.996188       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0514 00:18:07.076538    4316 command_runner.go:130] ! I0514 00:16:56.997004       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0514 00:18:07.076577    4316 command_runner.go:130] ! I0514 00:16:56.997141       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0514 00:18:07.076627    4316 command_runner.go:130] ! I0514 00:16:56.997174       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0514 00:18:07.076627    4316 command_runner.go:130] ! I0514 00:16:56.997363       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0514 00:18:07.076669    4316 command_runner.go:130] ! I0514 00:16:56.997373       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0514 00:18:07.076669    4316 command_runner.go:130] ! I0514 00:16:57.003479       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0514 00:18:07.076669    4316 command_runner.go:130] ! I0514 00:16:57.004086       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0514 00:18:07.076739    4316 command_runner.go:130] ! I0514 00:16:57.004336       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0514 00:18:07.076739    4316 command_runner.go:130] ! I0514 00:16:57.004348       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0514 00:18:07.076812    4316 command_runner.go:130] ! I0514 00:17:07.031733       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0514 00:18:07.076812    4316 command_runner.go:130] ! I0514 00:17:07.032143       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0514 00:18:07.076812    4316 command_runner.go:130] ! I0514 00:17:07.032242       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0514 00:18:07.076812    4316 command_runner.go:130] ! I0514 00:17:07.032648       1 shared_informer.go:313] Waiting for caches to sync for node
	I0514 00:18:07.076911    4316 command_runner.go:130] ! I0514 00:17:07.034995       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0514 00:18:07.076911    4316 command_runner.go:130] ! I0514 00:17:07.035109       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0514 00:18:07.076996    4316 command_runner.go:130] ! I0514 00:17:07.035510       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.035544       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.035551       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.038183       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.038394       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.039212       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.040784       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.041050       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.041194       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.043909       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.044044       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.044106       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.059101       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.059352       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.059503       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.062189       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.062615       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.062641       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.070971       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.071021       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.071151       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.071293       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.071328       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.071388       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.083342       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.084321       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.084474       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.085952       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0514 00:18:07.077570    4316 command_runner.go:130] ! I0514 00:17:07.086347       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0514 00:18:07.077570    4316 command_runner.go:130] ! I0514 00:17:07.086569       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0514 00:18:07.077570    4316 command_runner.go:130] ! I0514 00:17:07.088414       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0514 00:18:07.077570    4316 command_runner.go:130] ! I0514 00:17:07.088731       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0514 00:18:07.077671    4316 command_runner.go:130] ! I0514 00:17:07.089444       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0514 00:18:07.077671    4316 command_runner.go:130] ! I0514 00:17:07.091486       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0514 00:18:07.077671    4316 command_runner.go:130] ! I0514 00:17:07.091650       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0514 00:18:07.077671    4316 command_runner.go:130] ! I0514 00:17:07.091678       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0514 00:18:07.077754    4316 command_runner.go:130] ! I0514 00:17:07.094570       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0514 00:18:07.077754    4316 command_runner.go:130] ! I0514 00:17:07.095467       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0514 00:18:07.077754    4316 command_runner.go:130] ! I0514 00:17:07.095818       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0514 00:18:07.077835    4316 command_runner.go:130] ! I0514 00:17:07.097778       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0514 00:18:07.077835    4316 command_runner.go:130] ! I0514 00:17:07.098911       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0514 00:18:07.077835    4316 command_runner.go:130] ! I0514 00:17:07.098939       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0514 00:18:07.077835    4316 command_runner.go:130] ! I0514 00:17:07.100648       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0514 00:18:07.077835    4316 command_runner.go:130] ! I0514 00:17:07.101514       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0514 00:18:07.077909    4316 command_runner.go:130] ! I0514 00:17:07.101659       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0514 00:18:07.077909    4316 command_runner.go:130] ! I0514 00:17:07.103436       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0514 00:18:07.077909    4316 command_runner.go:130] ! I0514 00:17:07.103908       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0514 00:18:07.077909    4316 command_runner.go:130] ! I0514 00:17:07.109194       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0514 00:18:07.077981    4316 command_runner.go:130] ! I0514 00:17:07.109267       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0514 00:18:07.077981    4316 command_runner.go:130] ! I0514 00:17:07.109496       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0514 00:18:07.077981    4316 command_runner.go:130] ! I0514 00:17:07.113760       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0514 00:18:07.078032    4316 command_runner.go:130] ! I0514 00:17:07.114024       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0514 00:18:07.078032    4316 command_runner.go:130] ! I0514 00:17:07.114252       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0514 00:18:07.078032    4316 command_runner.go:130] ! I0514 00:17:07.115259       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0514 00:18:07.078075    4316 command_runner.go:130] ! I0514 00:17:07.116925       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0514 00:18:07.078075    4316 command_runner.go:130] ! I0514 00:17:07.117254       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0514 00:18:07.078075    4316 command_runner.go:130] ! I0514 00:17:07.117353       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0514 00:18:07.078075    4316 command_runner.go:130] ! I0514 00:17:07.121368       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0514 00:18:07.078075    4316 command_runner.go:130] ! I0514 00:17:07.121764       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0514 00:18:07.078163    4316 command_runner.go:130] ! I0514 00:17:07.121788       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0514 00:18:07.078182    4316 command_runner.go:130] ! I0514 00:17:07.122128       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0514 00:18:07.078182    4316 command_runner.go:130] ! I0514 00:17:07.122156       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0514 00:18:07.078182    4316 command_runner.go:130] ! I0514 00:17:07.122248       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0514 00:18:07.078266    4316 command_runner.go:130] ! I0514 00:17:07.122301       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0514 00:18:07.078266    4316 command_runner.go:130] ! I0514 00:17:07.122371       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0514 00:18:07.078317    4316 command_runner.go:130] ! I0514 00:17:07.122432       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0514 00:18:07.078317    4316 command_runner.go:130] ! I0514 00:17:07.122464       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:07.078317    4316 command_runner.go:130] ! I0514 00:17:07.122706       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:07.078369    4316 command_runner.go:130] ! I0514 00:17:07.123282       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:07.078369    4316 command_runner.go:130] ! I0514 00:17:07.123678       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:07.078369    4316 command_runner.go:130] ! I0514 00:17:07.126535       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0514 00:18:07.078369    4316 command_runner.go:130] ! I0514 00:17:07.126692       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0514 00:18:07.078369    4316 command_runner.go:130] ! E0514 00:17:07.165594       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0514 00:18:07.078369    4316 command_runner.go:130] ! I0514 00:17:07.165634       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0514 00:18:07.078463    4316 command_runner.go:130] ! I0514 00:17:07.218097       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0514 00:18:07.078463    4316 command_runner.go:130] ! I0514 00:17:07.218271       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.218379       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.218721       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.265917       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.266033       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.266045       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.315398       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.315511       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.315534       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.415899       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.416022       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.465981       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.466026       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.466177       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.466545       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.516337       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.516498       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.516515       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.567477       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.567616       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.567627       1 shared_informer.go:313] Waiting for caches to sync for job
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.617346       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.617464       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.617476       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0514 00:18:07.078517    4316 command_runner.go:130] ! E0514 00:17:07.665765       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.665865       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.665876       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.671623       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.693623       1 shared_informer.go:320] Caches are synced for crt configmap
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.703208       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.707002       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100\" does not exist"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.707898       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m02\" does not exist"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.708010       1 shared_informer.go:320] Caches are synced for daemon sets
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.708168       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m03\" does not exist"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.710800       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.710879       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.716140       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.716709       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.717695       1 shared_informer.go:320] Caches are synced for cronjob
	I0514 00:18:07.079039    4316 command_runner.go:130] ! I0514 00:17:07.717710       1 shared_informer.go:320] Caches are synced for stateful set
	I0514 00:18:07.079039    4316 command_runner.go:130] ! I0514 00:17:07.718924       1 shared_informer.go:320] Caches are synced for attach detach
	I0514 00:18:07.079039    4316 command_runner.go:130] ! I0514 00:17:07.723267       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0514 00:18:07.079039    4316 command_runner.go:130] ! I0514 00:17:07.723378       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0514 00:18:07.079039    4316 command_runner.go:130] ! I0514 00:17:07.723467       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0514 00:18:07.079039    4316 command_runner.go:130] ! I0514 00:17:07.723495       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0514 00:18:07.079039    4316 command_runner.go:130] ! I0514 00:17:07.726980       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0514 00:18:07.079039    4316 command_runner.go:130] ! I0514 00:17:07.733271       1 shared_informer.go:320] Caches are synced for node
	I0514 00:18:07.079039    4316 command_runner.go:130] ! I0514 00:17:07.733445       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0514 00:18:07.079039    4316 command_runner.go:130] ! I0514 00:17:07.733467       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0514 00:18:07.079039    4316 command_runner.go:130] ! I0514 00:17:07.733473       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0514 00:18:07.079168    4316 command_runner.go:130] ! I0514 00:17:07.733480       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0514 00:18:07.079168    4316 command_runner.go:130] ! I0514 00:17:07.739996       1 shared_informer.go:320] Caches are synced for expand
	I0514 00:18:07.079168    4316 command_runner.go:130] ! I0514 00:17:07.742032       1 shared_informer.go:320] Caches are synced for PV protection
	I0514 00:18:07.079205    4316 command_runner.go:130] ! I0514 00:17:07.744959       1 shared_informer.go:320] Caches are synced for ephemeral
	I0514 00:18:07.079205    4316 command_runner.go:130] ! I0514 00:17:07.760453       1 shared_informer.go:320] Caches are synced for namespace
	I0514 00:18:07.079205    4316 command_runner.go:130] ! I0514 00:17:07.762790       1 shared_informer.go:320] Caches are synced for service account
	I0514 00:18:07.079205    4316 command_runner.go:130] ! I0514 00:17:07.766175       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0514 00:18:07.079205    4316 command_runner.go:130] ! I0514 00:17:07.767750       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0514 00:18:07.079205    4316 command_runner.go:130] ! I0514 00:17:07.768151       1 shared_informer.go:320] Caches are synced for job
	I0514 00:18:07.079205    4316 command_runner.go:130] ! I0514 00:17:07.779225       1 shared_informer.go:320] Caches are synced for TTL
	I0514 00:18:07.079305    4316 command_runner.go:130] ! I0514 00:17:07.779406       1 shared_informer.go:320] Caches are synced for GC
	I0514 00:18:07.079305    4316 command_runner.go:130] ! I0514 00:17:07.784902       1 shared_informer.go:320] Caches are synced for HPA
	I0514 00:18:07.079305    4316 command_runner.go:130] ! I0514 00:17:07.787441       1 shared_informer.go:320] Caches are synced for persistent volume
	I0514 00:18:07.079305    4316 command_runner.go:130] ! I0514 00:17:07.790178       1 shared_informer.go:320] Caches are synced for PVC protection
	I0514 00:18:07.079305    4316 command_runner.go:130] ! I0514 00:17:07.791571       1 shared_informer.go:320] Caches are synced for endpoint
	I0514 00:18:07.079305    4316 command_runner.go:130] ! I0514 00:17:07.797318       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0514 00:18:07.079305    4316 command_runner.go:130] ! I0514 00:17:07.816750       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0514 00:18:07.079305    4316 command_runner.go:130] ! I0514 00:17:07.836762       1 shared_informer.go:320] Caches are synced for taint
	I0514 00:18:07.079305    4316 command_runner.go:130] ! I0514 00:17:07.837127       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0514 00:18:07.079413    4316 command_runner.go:130] ! I0514 00:17:07.869081       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m03"
	I0514 00:18:07.079413    4316 command_runner.go:130] ! I0514 00:17:07.869544       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m02"
	I0514 00:18:07.079808    4316 command_runner.go:130] ! I0514 00:17:07.869413       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100"
	I0514 00:18:07.079808    4316 command_runner.go:130] ! I0514 00:17:07.870789       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0514 00:18:07.079808    4316 command_runner.go:130] ! I0514 00:17:07.898670       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 00:18:07.079881    4316 command_runner.go:130] ! I0514 00:17:07.901033       1 shared_informer.go:320] Caches are synced for deployment
	I0514 00:18:07.079881    4316 command_runner.go:130] ! I0514 00:17:07.904366       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0514 00:18:07.079911    4316 command_runner.go:130] ! I0514 00:17:07.916125       1 shared_informer.go:320] Caches are synced for disruption
	I0514 00:18:07.079911    4316 command_runner.go:130] ! I0514 00:17:07.977330       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 00:18:07.079950    4316 command_runner.go:130] ! I0514 00:17:07.988956       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0514 00:18:07.079950    4316 command_runner.go:130] ! I0514 00:17:08.134754       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="230.307102ms"
	I0514 00:18:07.079950    4316 command_runner.go:130] ! I0514 00:17:08.134896       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.6µs"
	I0514 00:18:07.079992    4316 command_runner.go:130] ! I0514 00:17:08.140785       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="234.508146ms"
	I0514 00:18:07.080010    4316 command_runner.go:130] ! I0514 00:17:08.140977       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.3µs"
	I0514 00:18:07.080010    4316 command_runner.go:130] ! I0514 00:17:08.412419       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 00:18:07.080010    4316 command_runner.go:130] ! I0514 00:17:08.472034       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 00:18:07.080010    4316 command_runner.go:130] ! I0514 00:17:08.472384       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0514 00:18:07.080099    4316 command_runner.go:130] ! I0514 00:17:37.878702       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0514 00:18:07.080099    4316 command_runner.go:130] ! I0514 00:18:01.608725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.75856ms"
	I0514 00:18:07.080124    4316 command_runner.go:130] ! I0514 00:18:01.608844       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.702µs"
	I0514 00:18:07.080124    4316 command_runner.go:130] ! I0514 00:18:01.651304       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.008µs"
	I0514 00:18:07.080124    4316 command_runner.go:130] ! I0514 00:18:01.710123       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.783088ms"
	I0514 00:18:07.080185    4316 command_runner.go:130] ! I0514 00:18:01.711762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.302µs"
	I0514 00:18:07.093635    4316 logs.go:123] Gathering logs for Docker ...
	I0514 00:18:07.093635    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0514 00:18:07.123038    4316 command_runner.go:130] > May 14 00:15:30 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0514 00:18:07.123038    4316 command_runner.go:130] > May 14 00:15:30 minikube cri-dockerd[223]: time="2024-05-14T00:15:30Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0514 00:18:07.123038    4316 command_runner.go:130] > May 14 00:15:30 minikube cri-dockerd[223]: time="2024-05-14T00:15:30Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0514 00:18:07.123038    4316 command_runner.go:130] > May 14 00:15:30 minikube cri-dockerd[223]: time="2024-05-14T00:15:30Z" level=info msg="Start docker client with request timeout 0s"
	I0514 00:18:07.123038    4316 command_runner.go:130] > May 14 00:15:30 minikube cri-dockerd[223]: time="2024-05-14T00:15:30Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0514 00:18:07.123038    4316 command_runner.go:130] > May 14 00:15:31 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:07.123038    4316 command_runner.go:130] > May 14 00:15:31 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0514 00:18:07.123038    4316 command_runner.go:130] > May 14 00:15:31 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0514 00:18:07.123038    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0514 00:18:07.123038    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0514 00:18:07.123367    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0514 00:18:07.123367    4316 command_runner.go:130] > May 14 00:15:33 minikube cri-dockerd[418]: time="2024-05-14T00:15:33Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0514 00:18:07.123367    4316 command_runner.go:130] > May 14 00:15:33 minikube cri-dockerd[418]: time="2024-05-14T00:15:33Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0514 00:18:07.123367    4316 command_runner.go:130] > May 14 00:15:33 minikube cri-dockerd[418]: time="2024-05-14T00:15:33Z" level=info msg="Start docker client with request timeout 0s"
	I0514 00:18:07.123418    4316 command_runner.go:130] > May 14 00:15:33 minikube cri-dockerd[418]: time="2024-05-14T00:15:33Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0514 00:18:07.123418    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:07.123418    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0514 00:18:07.123418    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0514 00:18:07.123418    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0514 00:18:07.123489    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0514 00:18:07.123489    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0514 00:18:07.123489    4316 command_runner.go:130] > May 14 00:15:36 minikube cri-dockerd[426]: time="2024-05-14T00:15:36Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0514 00:18:07.123532    4316 command_runner.go:130] > May 14 00:15:36 minikube cri-dockerd[426]: time="2024-05-14T00:15:36Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0514 00:18:07.123532    4316 command_runner.go:130] > May 14 00:15:36 minikube cri-dockerd[426]: time="2024-05-14T00:15:36Z" level=info msg="Start docker client with request timeout 0s"
	I0514 00:18:07.123532    4316 command_runner.go:130] > May 14 00:15:36 minikube cri-dockerd[426]: time="2024-05-14T00:15:36Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0514 00:18:07.123532    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:07.123532    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0514 00:18:07.123610    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0514 00:18:07.123610    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0514 00:18:07.123610    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0514 00:18:07.123610    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0514 00:18:07.123610    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0514 00:18:07.123610    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0514 00:18:07.123670    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 systemd[1]: Starting Docker Application Container Engine...
	I0514 00:18:07.123670    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[654]: time="2024-05-14T00:16:17.349024460Z" level=info msg="Starting up"
	I0514 00:18:07.123670    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[654]: time="2024-05-14T00:16:17.349886331Z" level=info msg="containerd not running, starting managed containerd"
	I0514 00:18:07.123670    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[654]: time="2024-05-14T00:16:17.351031392Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=660
	I0514 00:18:07.123739    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.380428255Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0514 00:18:07.123739    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.407060046Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0514 00:18:07.123790    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.407104860Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0514 00:18:07.123790    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.407157277Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0514 00:18:07.123790    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.407182685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.123861    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408093872Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:07.123861    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408200005Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.123924    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408421875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:07.123924    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408522107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.123978    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408552116Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0514 00:18:07.123978    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408565820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.123978    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.409126597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.124030    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.409855027Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.124030    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.412841968Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:07.124091    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.412982412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.124140    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.413109352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:07.124140    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.413195779Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0514 00:18:07.124140    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.414192994Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0514 00:18:07.124140    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.414303628Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0514 00:18:07.124140    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.414321234Z" level=info msg="metadata content store policy set" policy=shared
	I0514 00:18:07.124237    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420644226Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0514 00:18:07.124237    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420793973Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0514 00:18:07.124237    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420815380Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0514 00:18:07.124237    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420835086Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0514 00:18:07.124302    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420849391Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0514 00:18:07.124302    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421006640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0514 00:18:07.124340    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421303834Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0514 00:18:07.124340    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421395163Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0514 00:18:07.124453    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421479890Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0514 00:18:07.124453    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421494994Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0514 00:18:07.124548    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421507198Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.124586    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421523703Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.124622    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421540509Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.124622    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421554613Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.124691    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421571518Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.124691    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421584022Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.124691    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421594526Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.124760    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421604629Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.124760    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421626336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.124760    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421639040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.124817    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421651344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.124817    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421662947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.124817    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421673350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.124868    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421684554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.124868    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421695257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.124916    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421705961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.124916    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421717564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.124916    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421730268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.124967    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421774782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.124967    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421787286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.125030    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421797990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.125030    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421811094Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0514 00:18:07.125030    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421828299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.125082    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421838703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.125082    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421849206Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0514 00:18:07.125082    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421898721Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0514 00:18:07.125144    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421926330Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0514 00:18:07.125144    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421987549Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422004755Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422070276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422106987Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422118891Z" level=info msg="NRI interface is disabled by configuration."
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422453196Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422571233Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422619148Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422687970Z" level=info msg="containerd successfully booted in 0.044863s"
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:18 multinode-101100 dockerd[654]: time="2024-05-14T00:16:18.404653025Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:18 multinode-101100 dockerd[654]: time="2024-05-14T00:16:18.578701970Z" level=info msg="Loading containers: start."
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.027152626Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.105905244Z" level=info msg="Loading containers: done."
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.135340666Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.136139953Z" level=info msg="Daemon has completed initialization"
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.185948604Z" level=info msg="API listen on [::]:2376"
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.186071317Z" level=info msg="API listen on /var/run/docker.sock"
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 systemd[1]: Started Docker Application Container Engine.
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 systemd[1]: Stopping Docker Application Container Engine...
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.988898314Z" level=info msg="Processing signal 'terminated'"
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.989838579Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.990583130Z" level=info msg="Daemon shutdown complete"
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.990661536Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.990696238Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:42 multinode-101100 systemd[1]: docker.service: Deactivated successfully.
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:42 multinode-101100 systemd[1]: Stopped Docker Application Container Engine.
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 systemd[1]: Starting Docker Application Container Engine...
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:43.059729298Z" level=info msg="Starting up"
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:43.060541955Z" level=info msg="containerd not running, starting managed containerd"
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:43.061850245Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1055
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.092613476Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115368453Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115403155Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0514 00:18:07.125735    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115435257Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0514 00:18:07.125735    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115450359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.125787    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115473760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:07.125787    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115486261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.125787    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115635771Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:07.125849    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115738478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.125849    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115756280Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0514 00:18:07.125901    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115766280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.125901    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115789882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.125949    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.116031099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.125949    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.119790059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:07.126002    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.119888566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.126002    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120181886Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:07.126050    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120287794Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0514 00:18:07.126050    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120385900Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0514 00:18:07.126103    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120406702Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0514 00:18:07.126103    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120419603Z" level=info msg="metadata content store policy set" policy=shared
	I0514 00:18:07.126103    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120713023Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0514 00:18:07.126151    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120746825Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0514 00:18:07.126151    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120760126Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0514 00:18:07.126151    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120773227Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0514 00:18:07.126203    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120785328Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0514 00:18:07.126203    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120826831Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0514 00:18:07.126250    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120999543Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0514 00:18:07.126250    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121054147Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0514 00:18:07.126250    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121092049Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0514 00:18:07.126303    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121102050Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0514 00:18:07.126303    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121115951Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.126303    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121126152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.126349    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121135052Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.126349    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121145153Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121156354Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121165854Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121175255Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121184656Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121204657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121216358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121225759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121235159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121243960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121254361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121263161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121275762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121287763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121299564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121364668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121378369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121388070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121400871Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121421772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121432873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121442174Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121474076Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121485477Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121493977Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121504178Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121548581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121558382Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121570783Z" level=info msg="NRI interface is disabled by configuration."
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121732894Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121765696Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121795498Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0514 00:18:07.126936    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121808099Z" level=info msg="containerd successfully booted in 0.031442s"
	I0514 00:18:07.126936    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.110784113Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0514 00:18:07.126936    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.142577516Z" level=info msg="Loading containers: start."
	I0514 00:18:07.126986    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.405628939Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0514 00:18:07.126986    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.480865351Z" level=info msg="Loading containers: done."
	I0514 00:18:07.126986    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.503621028Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0514 00:18:07.126986    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.503703734Z" level=info msg="Daemon has completed initialization"
	I0514 00:18:07.127051    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.545253312Z" level=info msg="API listen on /var/run/docker.sock"
	I0514 00:18:07.127051    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.545312016Z" level=info msg="API listen on [::]:2376"
	I0514 00:18:07.127051    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 systemd[1]: Started Docker Application Container Engine.
	I0514 00:18:07.127051    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0514 00:18:07.127102    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0514 00:18:07.127102    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0514 00:18:07.127102    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Start docker client with request timeout 0s"
	I0514 00:18:07.127102    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0514 00:18:07.127166    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Loaded network plugin cni"
	I0514 00:18:07.127166    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0514 00:18:07.127166    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0514 00:18:07.127222    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0514 00:18:07.127222    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0514 00:18:07.127255    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Start cri-dockerd grpc backend"
	I0514 00:18:07.127255    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0514 00:18:07.127291    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-xqj6w_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"76d1b8ce19aba5b210540936b7a4b3d885cf4632a985872e3cf05d6cea2e0ca2\""
	I0514 00:18:07.127358    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-4kmx4_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"8bb49b28c842af421711ef939d018058baa07a32bbcdc98976511d4800986697\""
	I0514 00:18:07.127397    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.717439407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.127397    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.717535614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.127432    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.717551915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.127465    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.718214261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.127501    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.720663031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.127501    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.720923549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.127533    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.721017455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.127600    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.721295774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.127600    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.783128658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.127668    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.783344773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.127668    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.783450280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.127704    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.783657895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.127736    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.816093342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.127772    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.816151946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.127772    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.816166547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.127804    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.816251853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.127840    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ddcaadef980aca40a7740fe7c59949c3cb803d9fb441eca155b02162f3422bb8/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:07.127872    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/659643d47b9ae231a8b97d9871cab6dfac5f6d06e647c919d14170832ee47683/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:07.127939    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/419648c0d4053fc49953367496f1dbfe0fc7ce631e09569d18f5031a7c94053b/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:07.127939    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/509b8407e0955daa05e6418b83790728e61d0bd72fecdd814c8e92ae9e80d3a3/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:07.127975    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.258935521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.128013    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.259980593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.128013    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.260187008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128051    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.260361520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128083    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.272553064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.128120    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.272771779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.128153    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.272798781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128189    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.272907589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128189    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.314782590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.128227    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.314905098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.128264    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.314946601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128264    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.315263523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128302    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.385829312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.128338    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.386016625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.128338    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.386135333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128377    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.386495758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128413    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:55Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0514 00:18:07.128446    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.444453862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.128481    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.444531867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.128481    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.444549969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128520    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.444647976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128557    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.461909471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.128557    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.462106685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.128589    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.462142187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128625    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.462265196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128657    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.492511091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.128694    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.492965923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.128694    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.493135035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128727    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.493390352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128763    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a8ac60a565998ca52581e38272f2fcdb5f7038023f93d728cd74f5b89f5593ed/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:07.128794    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/468a0e2976ae45a571a99afabfcd1329c76873e973179fe56cc9ef46e2533698/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:07.128839    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.849392115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.128878    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.849539826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.128921    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.849623331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128959    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.849861048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128996    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.857219658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.129028    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.857468675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.129058    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.857687390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129105    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.858016113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129140    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5233e076edceb93931d756579982e556959dfd31508760da215a8407dca14e56/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:57.218178264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:57.218325574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:57.218348976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:57.218459383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 dockerd[1049]: time="2024-05-14T00:17:17.430189771Z" level=info msg="ignoring event" container=b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:17.431460316Z" level=info msg="shim disconnected" id=b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7 namespace=moby
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:17.431869631Z" level=warning msg="cleaning up after shim disconnected" id=b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7 namespace=moby
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:17.432007736Z" level=info msg="cleaning up dead shim" namespace=moby
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 dockerd[1049]: time="2024-05-14T00:17:27.281698284Z" level=info msg="ignoring event" container=b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:27.282877145Z" level=info msg="shim disconnected" id=b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc namespace=moby
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:27.283000451Z" level=warning msg="cleaning up after shim disconnected" id=b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc namespace=moby
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:27.283015352Z" level=info msg="cleaning up dead shim" namespace=moby
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:28 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:28.098999177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:28 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:28.099271791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:28 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:28.099326694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:28 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:28.099641511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:40 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:40.092603581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:40 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:40.093732951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:40 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:40.093768053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:40 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:40.095427255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129710    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235051362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.129710    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235156269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.129747    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235169170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129747    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235258576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235645702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235713507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235730808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235828014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:18:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1cccb5e8cee3b173bd49a88aee4239ccc8bc11a3a166316e92f3a9abce9b252d/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:18:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8cb9b6d6d0915742a78c054211d49332a04beb4875f8a8f80cc4131b2a11aa2d/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.743900500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.743970305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.744406335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.745139484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.808545660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.808756974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.808962988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.809189903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:04 multinode-101100 dockerd[1049]: 2024/05/14 00:18:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.130326    4316 command_runner.go:130] > May 14 00:18:04 multinode-101100 dockerd[1049]: 2024/05/14 00:18:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.130326    4316 command_runner.go:130] > May 14 00:18:04 multinode-101100 dockerd[1049]: 2024/05/14 00:18:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.130363    4316 command_runner.go:130] > May 14 00:18:06 multinode-101100 dockerd[1049]: 2024/05/14 00:18:06 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.130411    4316 command_runner.go:130] > May 14 00:18:06 multinode-101100 dockerd[1049]: 2024/05/14 00:18:06 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.130411    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.130411    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.130411    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.130411    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.130411    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.130411    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.130411    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.130411    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.160832    4316 logs.go:123] Gathering logs for describe nodes ...
	I0514 00:18:07.160832    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0514 00:18:07.337736    4316 command_runner.go:130] > Name:               multinode-101100
	I0514 00:18:07.337736    4316 command_runner.go:130] > Roles:              control-plane
	I0514 00:18:07.337736    4316 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     kubernetes.io/hostname=multinode-101100
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     kubernetes.io/os=linux
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     minikube.k8s.io/name=multinode-101100
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_13T23_56_10_0700
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0514 00:18:07.337736    4316 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0514 00:18:07.337736    4316 command_runner.go:130] > CreationTimestamp:  Mon, 13 May 2024 23:56:06 +0000
	I0514 00:18:07.337736    4316 command_runner.go:130] > Taints:             <none>
	I0514 00:18:07.337736    4316 command_runner.go:130] > Unschedulable:      false
	I0514 00:18:07.337736    4316 command_runner.go:130] > Lease:
	I0514 00:18:07.337736    4316 command_runner.go:130] >   HolderIdentity:  multinode-101100
	I0514 00:18:07.337736    4316 command_runner.go:130] >   AcquireTime:     <unset>
	I0514 00:18:07.337736    4316 command_runner.go:130] >   RenewTime:       Tue, 14 May 2024 00:18:06 +0000
	I0514 00:18:07.337736    4316 command_runner.go:130] > Conditions:
	I0514 00:18:07.337736    4316 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0514 00:18:07.337736    4316 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0514 00:18:07.337736    4316 command_runner.go:130] >   MemoryPressure   False   Tue, 14 May 2024 00:17:35 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0514 00:18:07.337736    4316 command_runner.go:130] >   DiskPressure     False   Tue, 14 May 2024 00:17:35 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0514 00:18:07.337736    4316 command_runner.go:130] >   PIDPressure      False   Tue, 14 May 2024 00:17:35 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0514 00:18:07.337736    4316 command_runner.go:130] >   Ready            True    Tue, 14 May 2024 00:17:35 +0000   Tue, 14 May 2024 00:17:35 +0000   KubeletReady                 kubelet is posting ready status
	I0514 00:18:07.337736    4316 command_runner.go:130] > Addresses:
	I0514 00:18:07.337736    4316 command_runner.go:130] >   InternalIP:  172.23.102.122
	I0514 00:18:07.337736    4316 command_runner.go:130] >   Hostname:    multinode-101100
	I0514 00:18:07.337736    4316 command_runner.go:130] > Capacity:
	I0514 00:18:07.337736    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:07.337736    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:07.337736    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:07.337736    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:07.337736    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:07.337736    4316 command_runner.go:130] > Allocatable:
	I0514 00:18:07.337736    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:07.337736    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:07.337736    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:07.337736    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:07.337736    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:07.337736    4316 command_runner.go:130] > System Info:
	I0514 00:18:07.337736    4316 command_runner.go:130] >   Machine ID:                 5110a322e7104904905e303a94b950b6
	I0514 00:18:07.337736    4316 command_runner.go:130] >   System UUID:                9b23fe4d-6d34-444b-8185-a84d51d23610
	I0514 00:18:07.337736    4316 command_runner.go:130] >   Boot ID:                    2e73d191-2dbe-4055-a17d-cff8a9e53a15
	I0514 00:18:07.337736    4316 command_runner.go:130] >   Kernel Version:             5.10.207
	I0514 00:18:07.337736    4316 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0514 00:18:07.337736    4316 command_runner.go:130] >   Operating System:           linux
	I0514 00:18:07.337736    4316 command_runner.go:130] >   Architecture:               amd64
	I0514 00:18:07.337736    4316 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0514 00:18:07.337736    4316 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0514 00:18:07.337736    4316 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0514 00:18:07.338804    4316 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0514 00:18:07.338804    4316 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0514 00:18:07.338804    4316 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0514 00:18:07.338804    4316 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0514 00:18:07.338804    4316 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0514 00:18:07.338804    4316 command_runner.go:130] >   default                     busybox-fc5497c4f-xqj6w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	I0514 00:18:07.338804    4316 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-4kmx4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	I0514 00:18:07.338804    4316 command_runner.go:130] >   kube-system                 etcd-multinode-101100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	I0514 00:18:07.338804    4316 command_runner.go:130] >   kube-system                 kindnet-9q2tv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0514 00:18:07.338938    4316 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-101100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	I0514 00:18:07.338938    4316 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-101100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	I0514 00:18:07.338938    4316 command_runner.go:130] >   kube-system                 kube-proxy-zhcz6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0514 00:18:07.338938    4316 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-101100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	I0514 00:18:07.338938    4316 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0514 00:18:07.338938    4316 command_runner.go:130] > Allocated resources:
	I0514 00:18:07.338938    4316 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0514 00:18:07.339051    4316 command_runner.go:130] >   Resource           Requests     Limits
	I0514 00:18:07.339051    4316 command_runner.go:130] >   --------           --------     ------
	I0514 00:18:07.339051    4316 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0514 00:18:07.339051    4316 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0514 00:18:07.339051    4316 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0514 00:18:07.339051    4316 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0514 00:18:07.339051    4316 command_runner.go:130] > Events:
	I0514 00:18:07.339051    4316 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0514 00:18:07.339051    4316 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0514 00:18:07.339051    4316 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0514 00:18:07.339051    4316 command_runner.go:130] >   Normal  Starting                 69s                kube-proxy       
	I0514 00:18:07.339051    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	I0514 00:18:07.339185    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	I0514 00:18:07.339185    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	I0514 00:18:07.339185    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:07.339185    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m                kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	I0514 00:18:07.339255    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:07.339255    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m                kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	I0514 00:18:07.339255    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m                kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	I0514 00:18:07.339255    4316 command_runner.go:130] >   Normal  Starting                 21m                kubelet          Starting kubelet.
	I0514 00:18:07.339255    4316 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-101100 event: Registered Node multinode-101100 in Controller
	I0514 00:18:07.339255    4316 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-101100 status is now: NodeReady
	I0514 00:18:07.339255    4316 command_runner.go:130] >   Normal  Starting                 78s                kubelet          Starting kubelet.
	I0514 00:18:07.339255    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:07.339378    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  77s (x8 over 78s)  kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	I0514 00:18:07.339378    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    77s (x8 over 78s)  kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	I0514 00:18:07.339378    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     77s (x7 over 78s)  kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	I0514 00:18:07.339433    4316 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-101100 event: Registered Node multinode-101100 in Controller
	I0514 00:18:07.339433    4316 command_runner.go:130] > Name:               multinode-101100-m02
	I0514 00:18:07.339433    4316 command_runner.go:130] > Roles:              <none>
	I0514 00:18:07.339433    4316 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0514 00:18:07.339433    4316 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0514 00:18:07.339433    4316 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0514 00:18:07.339433    4316 command_runner.go:130] >                     kubernetes.io/hostname=multinode-101100-m02
	I0514 00:18:07.339525    4316 command_runner.go:130] >                     kubernetes.io/os=linux
	I0514 00:18:07.339525    4316 command_runner.go:130] >                     minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	I0514 00:18:07.339525    4316 command_runner.go:130] >                     minikube.k8s.io/name=multinode-101100
	I0514 00:18:07.339525    4316 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0514 00:18:07.339592    4316 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_13T23_59_02_0700
	I0514 00:18:07.339592    4316 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0514 00:18:07.339592    4316 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0514 00:18:07.339592    4316 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0514 00:18:07.339592    4316 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0514 00:18:07.339687    4316 command_runner.go:130] > CreationTimestamp:  Mon, 13 May 2024 23:59:02 +0000
	I0514 00:18:07.339687    4316 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0514 00:18:07.339687    4316 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0514 00:18:07.339687    4316 command_runner.go:130] > Unschedulable:      false
	I0514 00:18:07.339687    4316 command_runner.go:130] > Lease:
	I0514 00:18:07.339687    4316 command_runner.go:130] >   HolderIdentity:  multinode-101100-m02
	I0514 00:18:07.339687    4316 command_runner.go:130] >   AcquireTime:     <unset>
	I0514 00:18:07.339687    4316 command_runner.go:130] >   RenewTime:       Tue, 14 May 2024 00:13:52 +0000
	I0514 00:18:07.339687    4316 command_runner.go:130] > Conditions:
	I0514 00:18:07.339687    4316 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0514 00:18:07.339687    4316 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0514 00:18:07.339827    4316 command_runner.go:130] >   MemoryPressure   Unknown   Tue, 14 May 2024 00:10:15 +0000   Tue, 14 May 2024 00:14:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:07.339827    4316 command_runner.go:130] >   DiskPressure     Unknown   Tue, 14 May 2024 00:10:15 +0000   Tue, 14 May 2024 00:14:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:07.339827    4316 command_runner.go:130] >   PIDPressure      Unknown   Tue, 14 May 2024 00:10:15 +0000   Tue, 14 May 2024 00:14:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:07.339827    4316 command_runner.go:130] >   Ready            Unknown   Tue, 14 May 2024 00:10:15 +0000   Tue, 14 May 2024 00:14:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:07.339827    4316 command_runner.go:130] > Addresses:
	I0514 00:18:07.339827    4316 command_runner.go:130] >   InternalIP:  172.23.109.58
	I0514 00:18:07.339827    4316 command_runner.go:130] >   Hostname:    multinode-101100-m02
	I0514 00:18:07.339827    4316 command_runner.go:130] > Capacity:
	I0514 00:18:07.339827    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:07.339827    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:07.339827    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:07.339943    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:07.339943    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:07.339943    4316 command_runner.go:130] > Allocatable:
	I0514 00:18:07.339943    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:07.339943    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:07.339943    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:07.339943    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:07.339943    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:07.339943    4316 command_runner.go:130] > System Info:
	I0514 00:18:07.339943    4316 command_runner.go:130] >   Machine ID:                 8d348bb1bbc048f4b99c681873b42d63
	I0514 00:18:07.339943    4316 command_runner.go:130] >   System UUID:                4330851b-5248-f245-9378-5fc25e670b55
	I0514 00:18:07.339943    4316 command_runner.go:130] >   Boot ID:                    9f102be6-1468-4570-8696-97e5ce51649a
	I0514 00:18:07.339943    4316 command_runner.go:130] >   Kernel Version:             5.10.207
	I0514 00:18:07.339943    4316 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0514 00:18:07.339943    4316 command_runner.go:130] >   Operating System:           linux
	I0514 00:18:07.340067    4316 command_runner.go:130] >   Architecture:               amd64
	I0514 00:18:07.340067    4316 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0514 00:18:07.340067    4316 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0514 00:18:07.340067    4316 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0514 00:18:07.340067    4316 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0514 00:18:07.340067    4316 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0514 00:18:07.340067    4316 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0514 00:18:07.340067    4316 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0514 00:18:07.340067    4316 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0514 00:18:07.340067    4316 command_runner.go:130] >   default                     busybox-fc5497c4f-q7442    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	I0514 00:18:07.340067    4316 command_runner.go:130] >   kube-system                 kindnet-2lwsm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	I0514 00:18:07.340067    4316 command_runner.go:130] >   kube-system                 kube-proxy-b25hq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0514 00:18:07.340067    4316 command_runner.go:130] > Allocated resources:
	I0514 00:18:07.340067    4316 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0514 00:18:07.340067    4316 command_runner.go:130] >   Resource           Requests   Limits
	I0514 00:18:07.340225    4316 command_runner.go:130] >   --------           --------   ------
	I0514 00:18:07.340225    4316 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0514 00:18:07.340225    4316 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0514 00:18:07.340277    4316 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0514 00:18:07.340277    4316 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0514 00:18:07.340277    4316 command_runner.go:130] > Events:
	I0514 00:18:07.340277    4316 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0514 00:18:07.340277    4316 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0514 00:18:07.340277    4316 command_runner.go:130] >   Normal  Starting                 18m                kube-proxy       
	I0514 00:18:07.340277    4316 command_runner.go:130] >   Normal  RegisteredNode           19m                node-controller  Node multinode-101100-m02 event: Registered Node multinode-101100-m02 in Controller
	I0514 00:18:07.340277    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  19m (x2 over 19m)  kubelet          Node multinode-101100-m02 status is now: NodeHasSufficientMemory
	I0514 00:18:07.340277    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    19m (x2 over 19m)  kubelet          Node multinode-101100-m02 status is now: NodeHasNoDiskPressure
	I0514 00:18:07.340399    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     19m (x2 over 19m)  kubelet          Node multinode-101100-m02 status is now: NodeHasSufficientPID
	I0514 00:18:07.340399    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:07.340431    4316 command_runner.go:130] >   Normal  NodeReady                18m                kubelet          Node multinode-101100-m02 status is now: NodeReady
	I0514 00:18:07.340481    4316 command_runner.go:130] >   Normal  NodeNotReady             3m35s              node-controller  Node multinode-101100-m02 status is now: NodeNotReady
	I0514 00:18:07.340481    4316 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-101100-m02 event: Registered Node multinode-101100-m02 in Controller
	I0514 00:18:07.340481    4316 command_runner.go:130] > Name:               multinode-101100-m03
	I0514 00:18:07.340545    4316 command_runner.go:130] > Roles:              <none>
	I0514 00:18:07.340545    4316 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0514 00:18:07.340545    4316 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0514 00:18:07.340545    4316 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0514 00:18:07.340545    4316 command_runner.go:130] >                     kubernetes.io/hostname=multinode-101100-m03
	I0514 00:18:07.340545    4316 command_runner.go:130] >                     kubernetes.io/os=linux
	I0514 00:18:07.340616    4316 command_runner.go:130] >                     minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	I0514 00:18:07.340616    4316 command_runner.go:130] >                     minikube.k8s.io/name=multinode-101100
	I0514 00:18:07.340616    4316 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0514 00:18:07.340616    4316 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_14T00_12_45_0700
	I0514 00:18:07.340679    4316 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0514 00:18:07.340679    4316 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0514 00:18:07.340679    4316 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0514 00:18:07.340747    4316 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0514 00:18:07.340747    4316 command_runner.go:130] > CreationTimestamp:  Tue, 14 May 2024 00:12:44 +0000
	I0514 00:18:07.340747    4316 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0514 00:18:07.340747    4316 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0514 00:18:07.340747    4316 command_runner.go:130] > Unschedulable:      false
	I0514 00:18:07.340812    4316 command_runner.go:130] > Lease:
	I0514 00:18:07.340812    4316 command_runner.go:130] >   HolderIdentity:  multinode-101100-m03
	I0514 00:18:07.340812    4316 command_runner.go:130] >   AcquireTime:     <unset>
	I0514 00:18:07.340812    4316 command_runner.go:130] >   RenewTime:       Tue, 14 May 2024 00:13:36 +0000
	I0514 00:18:07.340812    4316 command_runner.go:130] > Conditions:
	I0514 00:18:07.340812    4316 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0514 00:18:07.340882    4316 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0514 00:18:07.340882    4316 command_runner.go:130] >   MemoryPressure   Unknown   Tue, 14 May 2024 00:12:49 +0000   Tue, 14 May 2024 00:14:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:07.340945    4316 command_runner.go:130] >   DiskPressure     Unknown   Tue, 14 May 2024 00:12:49 +0000   Tue, 14 May 2024 00:14:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:07.340945    4316 command_runner.go:130] >   PIDPressure      Unknown   Tue, 14 May 2024 00:12:49 +0000   Tue, 14 May 2024 00:14:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:07.340945    4316 command_runner.go:130] >   Ready            Unknown   Tue, 14 May 2024 00:12:49 +0000   Tue, 14 May 2024 00:14:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:07.340945    4316 command_runner.go:130] > Addresses:
	I0514 00:18:07.340945    4316 command_runner.go:130] >   InternalIP:  172.23.102.231
	I0514 00:18:07.340945    4316 command_runner.go:130] >   Hostname:    multinode-101100-m03
	I0514 00:18:07.341024    4316 command_runner.go:130] > Capacity:
	I0514 00:18:07.341024    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:07.341024    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:07.341024    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:07.341082    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:07.341082    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:07.341082    4316 command_runner.go:130] > Allocatable:
	I0514 00:18:07.341082    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:07.341082    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:07.341082    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:07.341082    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:07.341152    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:07.341152    4316 command_runner.go:130] > System Info:
	I0514 00:18:07.341152    4316 command_runner.go:130] >   Machine ID:                 11c3fac528de4278b1dafef49e54ea09
	I0514 00:18:07.341152    4316 command_runner.go:130] >   System UUID:                0ee228e5-87a6-0549-9a8d-1747b73431ee
	I0514 00:18:07.341215    4316 command_runner.go:130] >   Boot ID:                    d5c1e04c-3081-4871-912e-a86507b8e24a
	I0514 00:18:07.341215    4316 command_runner.go:130] >   Kernel Version:             5.10.207
	I0514 00:18:07.341215    4316 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0514 00:18:07.341215    4316 command_runner.go:130] >   Operating System:           linux
	I0514 00:18:07.341215    4316 command_runner.go:130] >   Architecture:               amd64
	I0514 00:18:07.341275    4316 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0514 00:18:07.341275    4316 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0514 00:18:07.341275    4316 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0514 00:18:07.341275    4316 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0514 00:18:07.341275    4316 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0514 00:18:07.341275    4316 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0514 00:18:07.341341    4316 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0514 00:18:07.341341    4316 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0514 00:18:07.341341    4316 command_runner.go:130] >   kube-system                 kindnet-tfbt8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	I0514 00:18:07.341410    4316 command_runner.go:130] >   kube-system                 kube-proxy-8zsgn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	I0514 00:18:07.341410    4316 command_runner.go:130] > Allocated resources:
	I0514 00:18:07.341410    4316 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0514 00:18:07.341410    4316 command_runner.go:130] >   Resource           Requests   Limits
	I0514 00:18:07.341410    4316 command_runner.go:130] >   --------           --------   ------
	I0514 00:18:07.341475    4316 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0514 00:18:07.341545    4316 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0514 00:18:07.341545    4316 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0514 00:18:07.341545    4316 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0514 00:18:07.341545    4316 command_runner.go:130] > Events:
	I0514 00:18:07.341545    4316 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0514 00:18:07.341609    4316 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0514 00:18:07.341609    4316 command_runner.go:130] >   Normal  Starting                 5m19s                  kube-proxy       
	I0514 00:18:07.341609    4316 command_runner.go:130] >   Normal  Starting                 14m                    kube-proxy       
	I0514 00:18:07.341609    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:07.341680    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  14m (x2 over 14m)      kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientMemory
	I0514 00:18:07.341680    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    14m (x2 over 14m)      kubelet          Node multinode-101100-m03 status is now: NodeHasNoDiskPressure
	I0514 00:18:07.341680    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     14m (x2 over 14m)      kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientPID
	I0514 00:18:07.341748    4316 command_runner.go:130] >   Normal  NodeReady                14m                    kubelet          Node multinode-101100-m03 status is now: NodeReady
	I0514 00:18:07.341748    4316 command_runner.go:130] >   Normal  Starting                 5m23s                  kubelet          Starting kubelet.
	I0514 00:18:07.341792    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m23s (x2 over 5m23s)  kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientMemory
	I0514 00:18:07.341792    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m23s (x2 over 5m23s)  kubelet          Node multinode-101100-m03 status is now: NodeHasNoDiskPressure
	I0514 00:18:07.341857    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m23s (x2 over 5m23s)  kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientPID
	I0514 00:18:07.341857    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m23s                  kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:07.341901    4316 command_runner.go:130] >   Normal  RegisteredNode           5m20s                  node-controller  Node multinode-101100-m03 event: Registered Node multinode-101100-m03 in Controller
	I0514 00:18:07.341949    4316 command_runner.go:130] >   Normal  NodeReady                5m18s                  kubelet          Node multinode-101100-m03 status is now: NodeReady
	I0514 00:18:07.341949    4316 command_runner.go:130] >   Normal  NodeNotReady             3m50s                  node-controller  Node multinode-101100-m03 status is now: NodeNotReady
	I0514 00:18:07.341990    4316 command_runner.go:130] >   Normal  RegisteredNode           60s                    node-controller  Node multinode-101100-m03 event: Registered Node multinode-101100-m03 in Controller
	I0514 00:18:07.351337    4316 logs.go:123] Gathering logs for kube-proxy [91edaaa00da2] ...
	I0514 00:18:07.351337    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91edaaa00da2"
	I0514 00:18:07.381927    4316 command_runner.go:130] ! I0513 23:56:24.901713       1 server_linux.go:69] "Using iptables proxy"
	I0514 00:18:07.382210    4316 command_runner.go:130] ! I0513 23:56:24.929714       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.106.39"]
	I0514 00:18:07.382447    4316 command_runner.go:130] ! I0513 23:56:24.982680       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0514 00:18:07.382447    4316 command_runner.go:130] ! I0513 23:56:24.982795       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0514 00:18:07.382447    4316 command_runner.go:130] ! I0513 23:56:24.982816       1 server_linux.go:165] "Using iptables Proxier"
	I0514 00:18:07.382563    4316 command_runner.go:130] ! I0513 23:56:24.988669       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0514 00:18:07.382630    4316 command_runner.go:130] ! I0513 23:56:24.989566       1 server.go:872] "Version info" version="v1.30.0"
	I0514 00:18:07.382697    4316 command_runner.go:130] ! I0513 23:56:24.989671       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:07.382697    4316 command_runner.go:130] ! I0513 23:56:24.992700       1 config.go:192] "Starting service config controller"
	I0514 00:18:07.382697    4316 command_runner.go:130] ! I0513 23:56:24.993131       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0514 00:18:07.382793    4316 command_runner.go:130] ! I0513 23:56:24.993327       1 config.go:101] "Starting endpoint slice config controller"
	I0514 00:18:07.382793    4316 command_runner.go:130] ! I0513 23:56:24.993339       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0514 00:18:07.382793    4316 command_runner.go:130] ! I0513 23:56:24.994714       1 config.go:319] "Starting node config controller"
	I0514 00:18:07.382913    4316 command_runner.go:130] ! I0513 23:56:24.994744       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0514 00:18:07.382913    4316 command_runner.go:130] ! I0513 23:56:25.094420       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0514 00:18:07.382913    4316 command_runner.go:130] ! I0513 23:56:25.094530       1 shared_informer.go:320] Caches are synced for service config
	I0514 00:18:07.383027    4316 command_runner.go:130] ! I0513 23:56:25.094981       1 shared_informer.go:320] Caches are synced for node config
	I0514 00:18:07.385779    4316 logs.go:123] Gathering logs for kindnet [2b424a7cd98c] ...
	I0514 00:18:07.385837    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b424a7cd98c"
	I0514 00:18:07.409610    4316 command_runner.go:130] ! I0514 00:17:28.349800       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0514 00:18:07.409610    4316 command_runner.go:130] ! I0514 00:17:28.349935       1 main.go:107] hostIP = 172.23.102.122
	I0514 00:18:07.409610    4316 command_runner.go:130] ! podIP = 172.23.102.122
	I0514 00:18:07.410591    4316 command_runner.go:130] ! I0514 00:17:28.441282       1 main.go:116] setting mtu 1500 for CNI 
	I0514 00:18:07.410591    4316 command_runner.go:130] ! I0514 00:17:28.441413       1 main.go:146] kindnetd IP family: "ipv4"
	I0514 00:18:07.410591    4316 command_runner.go:130] ! I0514 00:17:28.441441       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0514 00:18:07.410634    4316 command_runner.go:130] ! I0514 00:17:29.045047       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:07.410634    4316 command_runner.go:130] ! I0514 00:17:29.045110       1 main.go:227] handling current node
	I0514 00:18:07.410634    4316 command_runner.go:130] ! I0514 00:17:29.045545       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:07.410634    4316 command_runner.go:130] ! I0514 00:17:29.045580       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:07.410683    4316 command_runner.go:130] ! I0514 00:17:29.045839       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.23.109.58 Flags: [] Table: 0} 
	I0514 00:18:07.410683    4316 command_runner.go:130] ! I0514 00:17:29.045983       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:07.410683    4316 command_runner.go:130] ! I0514 00:17:29.045993       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:07.410722    4316 command_runner.go:130] ! I0514 00:17:29.046039       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.23.102.231 Flags: [] Table: 0} 
	I0514 00:18:07.410722    4316 command_runner.go:130] ! I0514 00:17:39.055904       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:07.410774    4316 command_runner.go:130] ! I0514 00:17:39.056127       1 main.go:227] handling current node
	I0514 00:18:07.410774    4316 command_runner.go:130] ! I0514 00:17:39.056141       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:07.410774    4316 command_runner.go:130] ! I0514 00:17:39.056155       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:07.410820    4316 command_runner.go:130] ! I0514 00:17:39.056412       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:07.410820    4316 command_runner.go:130] ! I0514 00:17:39.056502       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:07.410820    4316 command_runner.go:130] ! I0514 00:17:49.062369       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:07.410820    4316 command_runner.go:130] ! I0514 00:17:49.062453       1 main.go:227] handling current node
	I0514 00:18:07.410868    4316 command_runner.go:130] ! I0514 00:17:49.062465       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:07.410868    4316 command_runner.go:130] ! I0514 00:17:49.062483       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:07.410868    4316 command_runner.go:130] ! I0514 00:17:49.062816       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:07.410914    4316 command_runner.go:130] ! I0514 00:17:49.062843       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:07.410914    4316 command_runner.go:130] ! I0514 00:17:59.075229       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:07.410914    4316 command_runner.go:130] ! I0514 00:17:59.075506       1 main.go:227] handling current node
	I0514 00:18:07.410914    4316 command_runner.go:130] ! I0514 00:17:59.075588       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:07.410962    4316 command_runner.go:130] ! I0514 00:17:59.075650       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:07.410962    4316 command_runner.go:130] ! I0514 00:17:59.075827       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:07.410962    4316 command_runner.go:130] ! I0514 00:17:59.075835       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:09.927617    4316 api_server.go:253] Checking apiserver healthz at https://172.23.102.122:8443/healthz ...
	I0514 00:18:09.936837    4316 api_server.go:279] https://172.23.102.122:8443/healthz returned 200:
	ok
	I0514 00:18:09.937043    4316 round_trippers.go:463] GET https://172.23.102.122:8443/version
	I0514 00:18:09.937043    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:09.937043    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:09.937159    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:09.938884    4316 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0514 00:18:09.938884    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:09.938884    4316 round_trippers.go:580]     Content-Length: 263
	I0514 00:18:09.938884    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:10 GMT
	I0514 00:18:09.939089    4316 round_trippers.go:580]     Audit-Id: e22436c5-0691-4fc9-a5ea-405f5ed5ffca
	I0514 00:18:09.939089    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:09.939089    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:09.939089    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:09.939089    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:09.939089    4316 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0514 00:18:09.939089    4316 api_server.go:141] control plane version: v1.30.0
	I0514 00:18:09.939199    4316 api_server.go:131] duration metric: took 3.5675531s to wait for apiserver health ...
	I0514 00:18:09.939199    4316 system_pods.go:43] waiting for kube-system pods to appear ...
	I0514 00:18:09.945769    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0514 00:18:09.967876    4316 command_runner.go:130] > da9e6534cd87
	I0514 00:18:09.968989    4316 logs.go:276] 1 containers: [da9e6534cd87]
	I0514 00:18:09.975518    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0514 00:18:09.994526    4316 command_runner.go:130] > 08450c853590
	I0514 00:18:09.994974    4316 logs.go:276] 1 containers: [08450c853590]
	I0514 00:18:10.001317    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0514 00:18:10.021786    4316 command_runner.go:130] > dcc5a109288b
	I0514 00:18:10.021786    4316 command_runner.go:130] > 76c5ab7859ef
	I0514 00:18:10.023439    4316 logs.go:276] 2 containers: [dcc5a109288b 76c5ab7859ef]
	I0514 00:18:10.034318    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0514 00:18:10.057359    4316 command_runner.go:130] > d3581c1c570c
	I0514 00:18:10.057461    4316 command_runner.go:130] > 964887fc5d36
	I0514 00:18:10.058059    4316 logs.go:276] 2 containers: [d3581c1c570c 964887fc5d36]
	I0514 00:18:10.065779    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0514 00:18:10.088604    4316 command_runner.go:130] > b2a1b31cd7de
	I0514 00:18:10.088887    4316 command_runner.go:130] > 91edaaa00da2
	I0514 00:18:10.088947    4316 logs.go:276] 2 containers: [b2a1b31cd7de 91edaaa00da2]
	I0514 00:18:10.097362    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0514 00:18:10.128764    4316 command_runner.go:130] > b87239d1199a
	I0514 00:18:10.128764    4316 command_runner.go:130] > e96f94398d6d
	I0514 00:18:10.128764    4316 logs.go:276] 2 containers: [b87239d1199a e96f94398d6d]
	I0514 00:18:10.137628    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0514 00:18:10.158775    4316 command_runner.go:130] > 2b424a7cd98c
	I0514 00:18:10.158775    4316 command_runner.go:130] > b7d8d9a5e5ea
	I0514 00:18:10.160257    4316 logs.go:276] 2 containers: [2b424a7cd98c b7d8d9a5e5ea]
	I0514 00:18:10.160343    4316 logs.go:123] Gathering logs for kindnet [b7d8d9a5e5ea] ...
	I0514 00:18:10.160343    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d8d9a5e5ea"
	I0514 00:18:10.192439    4316 command_runner.go:130] ! I0514 00:16:57.751233       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0514 00:18:10.192439    4316 command_runner.go:130] ! I0514 00:16:57.751585       1 main.go:107] hostIP = 172.23.102.122
	I0514 00:18:10.192439    4316 command_runner.go:130] ! podIP = 172.23.102.122
	I0514 00:18:10.192439    4316 command_runner.go:130] ! I0514 00:16:57.752181       1 main.go:116] setting mtu 1500 for CNI 
	I0514 00:18:10.192439    4316 command_runner.go:130] ! I0514 00:16:57.752200       1 main.go:146] kindnetd IP family: "ipv4"
	I0514 00:18:10.192439    4316 command_runner.go:130] ! I0514 00:16:57.752221       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0514 00:18:10.192439    4316 command_runner.go:130] ! I0514 00:17:01.123977       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:10.192439    4316 command_runner.go:130] ! I0514 00:17:04.195495       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:10.192439    4316 command_runner.go:130] ! I0514 00:17:07.267636       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:10.192439    4316 command_runner.go:130] ! I0514 00:17:10.339619       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:10.192828    4316 command_runner.go:130] ! I0514 00:17:13.411801       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:10.192859    4316 command_runner.go:130] ! panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:10.192859    4316 command_runner.go:130] ! goroutine 1 [running]:
	I0514 00:18:10.192859    4316 command_runner.go:130] ! main.main()
	I0514 00:18:10.192859    4316 command_runner.go:130] ! 	/go/src/cmd/kindnetd/main.go:195 +0xd3d
	I0514 00:18:10.195416    4316 logs.go:123] Gathering logs for describe nodes ...
	I0514 00:18:10.195416    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0514 00:18:10.385028    4316 command_runner.go:130] > Name:               multinode-101100
	I0514 00:18:10.385028    4316 command_runner.go:130] > Roles:              control-plane
	I0514 00:18:10.385028    4316 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0514 00:18:10.385028    4316 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0514 00:18:10.385028    4316 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0514 00:18:10.385028    4316 command_runner.go:130] >                     kubernetes.io/hostname=multinode-101100
	I0514 00:18:10.385028    4316 command_runner.go:130] >                     kubernetes.io/os=linux
	I0514 00:18:10.385028    4316 command_runner.go:130] >                     minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	I0514 00:18:10.385028    4316 command_runner.go:130] >                     minikube.k8s.io/name=multinode-101100
	I0514 00:18:10.385028    4316 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0514 00:18:10.385028    4316 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_13T23_56_10_0700
	I0514 00:18:10.385028    4316 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0514 00:18:10.385028    4316 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0514 00:18:10.385266    4316 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0514 00:18:10.385266    4316 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0514 00:18:10.385266    4316 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0514 00:18:10.385266    4316 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0514 00:18:10.385266    4316 command_runner.go:130] > CreationTimestamp:  Mon, 13 May 2024 23:56:06 +0000
	I0514 00:18:10.385266    4316 command_runner.go:130] > Taints:             <none>
	I0514 00:18:10.385266    4316 command_runner.go:130] > Unschedulable:      false
	I0514 00:18:10.385266    4316 command_runner.go:130] > Lease:
	I0514 00:18:10.385339    4316 command_runner.go:130] >   HolderIdentity:  multinode-101100
	I0514 00:18:10.385339    4316 command_runner.go:130] >   AcquireTime:     <unset>
	I0514 00:18:10.385339    4316 command_runner.go:130] >   RenewTime:       Tue, 14 May 2024 00:18:06 +0000
	I0514 00:18:10.385339    4316 command_runner.go:130] > Conditions:
	I0514 00:18:10.385339    4316 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0514 00:18:10.385389    4316 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0514 00:18:10.385389    4316 command_runner.go:130] >   MemoryPressure   False   Tue, 14 May 2024 00:17:35 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0514 00:18:10.385389    4316 command_runner.go:130] >   DiskPressure     False   Tue, 14 May 2024 00:17:35 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0514 00:18:10.385389    4316 command_runner.go:130] >   PIDPressure      False   Tue, 14 May 2024 00:17:35 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0514 00:18:10.385389    4316 command_runner.go:130] >   Ready            True    Tue, 14 May 2024 00:17:35 +0000   Tue, 14 May 2024 00:17:35 +0000   KubeletReady                 kubelet is posting ready status
	I0514 00:18:10.385389    4316 command_runner.go:130] > Addresses:
	I0514 00:18:10.385533    4316 command_runner.go:130] >   InternalIP:  172.23.102.122
	I0514 00:18:10.385533    4316 command_runner.go:130] >   Hostname:    multinode-101100
	I0514 00:18:10.385533    4316 command_runner.go:130] > Capacity:
	I0514 00:18:10.385533    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:10.385594    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:10.385594    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:10.385594    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:10.385594    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:10.385594    4316 command_runner.go:130] > Allocatable:
	I0514 00:18:10.385594    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:10.385594    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:10.385594    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:10.385594    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:10.385594    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:10.385594    4316 command_runner.go:130] > System Info:
	I0514 00:18:10.385668    4316 command_runner.go:130] >   Machine ID:                 5110a322e7104904905e303a94b950b6
	I0514 00:18:10.385668    4316 command_runner.go:130] >   System UUID:                9b23fe4d-6d34-444b-8185-a84d51d23610
	I0514 00:18:10.385704    4316 command_runner.go:130] >   Boot ID:                    2e73d191-2dbe-4055-a17d-cff8a9e53a15
	I0514 00:18:10.385704    4316 command_runner.go:130] >   Kernel Version:             5.10.207
	I0514 00:18:10.385704    4316 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0514 00:18:10.385704    4316 command_runner.go:130] >   Operating System:           linux
	I0514 00:18:10.385740    4316 command_runner.go:130] >   Architecture:               amd64
	I0514 00:18:10.385740    4316 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0514 00:18:10.385740    4316 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0514 00:18:10.385798    4316 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0514 00:18:10.385798    4316 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0514 00:18:10.385798    4316 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0514 00:18:10.385798    4316 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0514 00:18:10.385855    4316 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0514 00:18:10.385855    4316 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0514 00:18:10.385855    4316 command_runner.go:130] >   default                     busybox-fc5497c4f-xqj6w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	I0514 00:18:10.385855    4316 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-4kmx4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	I0514 00:18:10.385927    4316 command_runner.go:130] >   kube-system                 etcd-multinode-101100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         75s
	I0514 00:18:10.385927    4316 command_runner.go:130] >   kube-system                 kindnet-9q2tv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0514 00:18:10.385927    4316 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-101100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	I0514 00:18:10.385965    4316 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-101100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	I0514 00:18:10.385965    4316 command_runner.go:130] >   kube-system                 kube-proxy-zhcz6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0514 00:18:10.386024    4316 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-101100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	I0514 00:18:10.386024    4316 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0514 00:18:10.386024    4316 command_runner.go:130] > Allocated resources:
	I0514 00:18:10.386024    4316 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0514 00:18:10.386024    4316 command_runner.go:130] >   Resource           Requests     Limits
	I0514 00:18:10.386024    4316 command_runner.go:130] >   --------           --------     ------
	I0514 00:18:10.386080    4316 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0514 00:18:10.386080    4316 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0514 00:18:10.386080    4316 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0514 00:18:10.386154    4316 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0514 00:18:10.386154    4316 command_runner.go:130] > Events:
	I0514 00:18:10.386154    4316 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0514 00:18:10.386154    4316 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0514 00:18:10.386192    4316 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0514 00:18:10.386192    4316 command_runner.go:130] >   Normal  Starting                 72s                kube-proxy       
	I0514 00:18:10.386192    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	I0514 00:18:10.386192    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	I0514 00:18:10.386192    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	I0514 00:18:10.386251    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:10.386251    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  22m                kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	I0514 00:18:10.386251    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:10.386251    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    22m                kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	I0514 00:18:10.386313    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     22m                kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	I0514 00:18:10.386313    4316 command_runner.go:130] >   Normal  Starting                 22m                kubelet          Starting kubelet.
	I0514 00:18:10.386313    4316 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-101100 event: Registered Node multinode-101100 in Controller
	I0514 00:18:10.386387    4316 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-101100 status is now: NodeReady
	I0514 00:18:10.386387    4316 command_runner.go:130] >   Normal  Starting                 81s                kubelet          Starting kubelet.
	I0514 00:18:10.386387    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  81s                kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:10.386444    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  80s (x8 over 81s)  kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	I0514 00:18:10.386444    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    80s (x8 over 81s)  kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	I0514 00:18:10.386444    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     80s (x7 over 81s)  kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	I0514 00:18:10.386496    4316 command_runner.go:130] >   Normal  RegisteredNode           63s                node-controller  Node multinode-101100 event: Registered Node multinode-101100 in Controller
	I0514 00:18:10.386496    4316 command_runner.go:130] > Name:               multinode-101100-m02
	I0514 00:18:10.386496    4316 command_runner.go:130] > Roles:              <none>
	I0514 00:18:10.386496    4316 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0514 00:18:10.386530    4316 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0514 00:18:10.386530    4316 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0514 00:18:10.386530    4316 command_runner.go:130] >                     kubernetes.io/hostname=multinode-101100-m02
	I0514 00:18:10.386530    4316 command_runner.go:130] >                     kubernetes.io/os=linux
	I0514 00:18:10.386576    4316 command_runner.go:130] >                     minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	I0514 00:18:10.386576    4316 command_runner.go:130] >                     minikube.k8s.io/name=multinode-101100
	I0514 00:18:10.386576    4316 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0514 00:18:10.386576    4316 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_13T23_59_02_0700
	I0514 00:18:10.386576    4316 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0514 00:18:10.386576    4316 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0514 00:18:10.386576    4316 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0514 00:18:10.386649    4316 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0514 00:18:10.386685    4316 command_runner.go:130] > CreationTimestamp:  Mon, 13 May 2024 23:59:02 +0000
	I0514 00:18:10.386685    4316 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0514 00:18:10.386685    4316 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0514 00:18:10.386685    4316 command_runner.go:130] > Unschedulable:      false
	I0514 00:18:10.386685    4316 command_runner.go:130] > Lease:
	I0514 00:18:10.386685    4316 command_runner.go:130] >   HolderIdentity:  multinode-101100-m02
	I0514 00:18:10.386745    4316 command_runner.go:130] >   AcquireTime:     <unset>
	I0514 00:18:10.386745    4316 command_runner.go:130] >   RenewTime:       Tue, 14 May 2024 00:13:52 +0000
	I0514 00:18:10.386745    4316 command_runner.go:130] > Conditions:
	I0514 00:18:10.386781    4316 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0514 00:18:10.386781    4316 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0514 00:18:10.386781    4316 command_runner.go:130] >   MemoryPressure   Unknown   Tue, 14 May 2024 00:10:15 +0000   Tue, 14 May 2024 00:14:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:10.386826    4316 command_runner.go:130] >   DiskPressure     Unknown   Tue, 14 May 2024 00:10:15 +0000   Tue, 14 May 2024 00:14:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:10.386826    4316 command_runner.go:130] >   PIDPressure      Unknown   Tue, 14 May 2024 00:10:15 +0000   Tue, 14 May 2024 00:14:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:10.386826    4316 command_runner.go:130] >   Ready            Unknown   Tue, 14 May 2024 00:10:15 +0000   Tue, 14 May 2024 00:14:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:10.386826    4316 command_runner.go:130] > Addresses:
	I0514 00:18:10.386826    4316 command_runner.go:130] >   InternalIP:  172.23.109.58
	I0514 00:18:10.386826    4316 command_runner.go:130] >   Hostname:    multinode-101100-m02
	I0514 00:18:10.386899    4316 command_runner.go:130] > Capacity:
	I0514 00:18:10.386899    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:10.386899    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:10.386899    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:10.386934    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:10.386934    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:10.386934    4316 command_runner.go:130] > Allocatable:
	I0514 00:18:10.386934    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:10.386934    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:10.386934    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:10.386980    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:10.386980    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:10.386980    4316 command_runner.go:130] > System Info:
	I0514 00:18:10.386980    4316 command_runner.go:130] >   Machine ID:                 8d348bb1bbc048f4b99c681873b42d63
	I0514 00:18:10.386980    4316 command_runner.go:130] >   System UUID:                4330851b-5248-f245-9378-5fc25e670b55
	I0514 00:18:10.386980    4316 command_runner.go:130] >   Boot ID:                    9f102be6-1468-4570-8696-97e5ce51649a
	I0514 00:18:10.386980    4316 command_runner.go:130] >   Kernel Version:             5.10.207
	I0514 00:18:10.387052    4316 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0514 00:18:10.387052    4316 command_runner.go:130] >   Operating System:           linux
	I0514 00:18:10.387052    4316 command_runner.go:130] >   Architecture:               amd64
	I0514 00:18:10.387088    4316 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0514 00:18:10.387088    4316 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0514 00:18:10.387088    4316 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0514 00:18:10.387088    4316 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0514 00:18:10.387088    4316 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0514 00:18:10.387150    4316 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0514 00:18:10.387150    4316 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0514 00:18:10.387150    4316 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0514 00:18:10.387188    4316 command_runner.go:130] >   default                     busybox-fc5497c4f-q7442    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	I0514 00:18:10.387188    4316 command_runner.go:130] >   kube-system                 kindnet-2lwsm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	I0514 00:18:10.387225    4316 command_runner.go:130] >   kube-system                 kube-proxy-b25hq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0514 00:18:10.387225    4316 command_runner.go:130] > Allocated resources:
	I0514 00:18:10.387225    4316 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0514 00:18:10.387225    4316 command_runner.go:130] >   Resource           Requests   Limits
	I0514 00:18:10.387225    4316 command_runner.go:130] >   --------           --------   ------
	I0514 00:18:10.387225    4316 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0514 00:18:10.387282    4316 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0514 00:18:10.387282    4316 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0514 00:18:10.387282    4316 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0514 00:18:10.387282    4316 command_runner.go:130] > Events:
	I0514 00:18:10.387282    4316 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0514 00:18:10.387282    4316 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0514 00:18:10.387282    4316 command_runner.go:130] >   Normal  Starting                 18m                kube-proxy       
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Normal  RegisteredNode           19m                node-controller  Node multinode-101100-m02 event: Registered Node multinode-101100-m02 in Controller
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  19m (x2 over 19m)  kubelet          Node multinode-101100-m02 status is now: NodeHasSufficientMemory
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    19m (x2 over 19m)  kubelet          Node multinode-101100-m02 status is now: NodeHasNoDiskPressure
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     19m (x2 over 19m)  kubelet          Node multinode-101100-m02 status is now: NodeHasSufficientPID
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Normal  NodeReady                18m                kubelet          Node multinode-101100-m02 status is now: NodeReady
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Normal  NodeNotReady             3m38s              node-controller  Node multinode-101100-m02 status is now: NodeNotReady
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Normal  RegisteredNode           63s                node-controller  Node multinode-101100-m02 event: Registered Node multinode-101100-m02 in Controller
	I0514 00:18:10.387356    4316 command_runner.go:130] > Name:               multinode-101100-m03
	I0514 00:18:10.387356    4316 command_runner.go:130] > Roles:              <none>
	I0514 00:18:10.387356    4316 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0514 00:18:10.387356    4316 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0514 00:18:10.387356    4316 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0514 00:18:10.387356    4316 command_runner.go:130] >                     kubernetes.io/hostname=multinode-101100-m03
	I0514 00:18:10.387356    4316 command_runner.go:130] >                     kubernetes.io/os=linux
	I0514 00:18:10.387356    4316 command_runner.go:130] >                     minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	I0514 00:18:10.387356    4316 command_runner.go:130] >                     minikube.k8s.io/name=multinode-101100
	I0514 00:18:10.387356    4316 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0514 00:18:10.387356    4316 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_14T00_12_45_0700
	I0514 00:18:10.387356    4316 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0514 00:18:10.387356    4316 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0514 00:18:10.387356    4316 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0514 00:18:10.387356    4316 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0514 00:18:10.387356    4316 command_runner.go:130] > CreationTimestamp:  Tue, 14 May 2024 00:12:44 +0000
	I0514 00:18:10.387356    4316 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0514 00:18:10.387356    4316 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0514 00:18:10.387356    4316 command_runner.go:130] > Unschedulable:      false
	I0514 00:18:10.387356    4316 command_runner.go:130] > Lease:
	I0514 00:18:10.387356    4316 command_runner.go:130] >   HolderIdentity:  multinode-101100-m03
	I0514 00:18:10.387356    4316 command_runner.go:130] >   AcquireTime:     <unset>
	I0514 00:18:10.387356    4316 command_runner.go:130] >   RenewTime:       Tue, 14 May 2024 00:13:36 +0000
	I0514 00:18:10.387356    4316 command_runner.go:130] > Conditions:
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0514 00:18:10.387356    4316 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0514 00:18:10.387356    4316 command_runner.go:130] >   MemoryPressure   Unknown   Tue, 14 May 2024 00:12:49 +0000   Tue, 14 May 2024 00:14:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:10.387356    4316 command_runner.go:130] >   DiskPressure     Unknown   Tue, 14 May 2024 00:12:49 +0000   Tue, 14 May 2024 00:14:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:10.387356    4316 command_runner.go:130] >   PIDPressure      Unknown   Tue, 14 May 2024 00:12:49 +0000   Tue, 14 May 2024 00:14:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Ready            Unknown   Tue, 14 May 2024 00:12:49 +0000   Tue, 14 May 2024 00:14:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:10.387356    4316 command_runner.go:130] > Addresses:
	I0514 00:18:10.387356    4316 command_runner.go:130] >   InternalIP:  172.23.102.231
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Hostname:    multinode-101100-m03
	I0514 00:18:10.387356    4316 command_runner.go:130] > Capacity:
	I0514 00:18:10.387356    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:10.387356    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:10.387356    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:10.387356    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:10.387356    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:10.387356    4316 command_runner.go:130] > Allocatable:
	I0514 00:18:10.387356    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:10.387356    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:10.387356    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:10.387356    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:10.387356    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:10.387356    4316 command_runner.go:130] > System Info:
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Machine ID:                 11c3fac528de4278b1dafef49e54ea09
	I0514 00:18:10.387356    4316 command_runner.go:130] >   System UUID:                0ee228e5-87a6-0549-9a8d-1747b73431ee
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Boot ID:                    d5c1e04c-3081-4871-912e-a86507b8e24a
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Kernel Version:             5.10.207
	I0514 00:18:10.387356    4316 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0514 00:18:10.387909    4316 command_runner.go:130] >   Operating System:           linux
	I0514 00:18:10.387909    4316 command_runner.go:130] >   Architecture:               amd64
	I0514 00:18:10.387909    4316 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0514 00:18:10.387949    4316 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0514 00:18:10.387949    4316 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0514 00:18:10.387949    4316 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0514 00:18:10.387949    4316 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0514 00:18:10.387992    4316 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0514 00:18:10.387992    4316 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0514 00:18:10.388024    4316 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0514 00:18:10.388051    4316 command_runner.go:130] >   kube-system                 kindnet-tfbt8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	I0514 00:18:10.388051    4316 command_runner.go:130] >   kube-system                 kube-proxy-8zsgn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	I0514 00:18:10.388051    4316 command_runner.go:130] > Allocated resources:
	I0514 00:18:10.388051    4316 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Resource           Requests   Limits
	I0514 00:18:10.388051    4316 command_runner.go:130] >   --------           --------   ------
	I0514 00:18:10.388051    4316 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0514 00:18:10.388051    4316 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0514 00:18:10.388051    4316 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0514 00:18:10.388051    4316 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0514 00:18:10.388051    4316 command_runner.go:130] > Events:
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0514 00:18:10.388051    4316 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  Starting                 5m22s                  kube-proxy       
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  Starting                 14m                    kube-proxy       
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  14m (x2 over 14m)      kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientMemory
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    14m (x2 over 14m)      kubelet          Node multinode-101100-m03 status is now: NodeHasNoDiskPressure
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     14m (x2 over 14m)      kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientPID
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  NodeReady                14m                    kubelet          Node multinode-101100-m03 status is now: NodeReady
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  Starting                 5m26s                  kubelet          Starting kubelet.
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m26s (x2 over 5m26s)  kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientMemory
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m26s (x2 over 5m26s)  kubelet          Node multinode-101100-m03 status is now: NodeHasNoDiskPressure
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m26s (x2 over 5m26s)  kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientPID
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m26s                  kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  RegisteredNode           5m23s                  node-controller  Node multinode-101100-m03 event: Registered Node multinode-101100-m03 in Controller
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  NodeReady                5m21s                  kubelet          Node multinode-101100-m03 status is now: NodeReady
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  NodeNotReady             3m53s                  node-controller  Node multinode-101100-m03 status is now: NodeNotReady
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  RegisteredNode           63s                    node-controller  Node multinode-101100-m03 event: Registered Node multinode-101100-m03 in Controller
	I0514 00:18:10.397595    4316 logs.go:123] Gathering logs for coredns [76c5ab7859ef] ...
	I0514 00:18:10.397595    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5ab7859ef"
	I0514 00:18:10.424991    4316 command_runner.go:130] > .:53
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = aa3c53a4fee7c79042020c4ad5abc53f615c90ace85c56ddcef4febd643c83c914a53a500e1bfe4eab6dd4f6a22b9d2014a8ba875b505ed10d3063ed95ac2ed3
	I0514 00:18:10.424991    4316 command_runner.go:130] > CoreDNS-1.11.1
	I0514 00:18:10.424991    4316 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 127.0.0.1:57161 - 45698 "HINFO IN 8990392176501838712.5889638972791529478. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051692136s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.1.2:55099 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000211505s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.1.2:55878 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.185519855s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.1.2:33619 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.15684109s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.1.2:49440 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.197645067s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.0.3:50960 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000430608s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.0.3:46839 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000167103s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.0.3:55330 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000155803s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.0.3:50874 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000131802s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.1.2:53724 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096802s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.1.2:59752 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.042707366s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.1.2:54429 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000269706s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.1.2:48558 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000262605s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.1.2:46986 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023487677s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.1.2:60460 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174903s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.1.2:60672 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000204304s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.1.2:36311 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110402s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.0.3:43910 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000301006s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.0.3:52495 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000145803s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.0.3:46357 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066702s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.0.3:41390 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062301s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.0.3:35739 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000084301s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.0.3:44800 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000163303s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.0.3:57631 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068702s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.0.3:50842 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135702s
	I0514 00:18:10.425547    4316 command_runner.go:130] > [INFO] 10.244.1.2:41210 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204604s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.1.2:57858 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073801s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.1.2:48782 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000152303s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.1.2:36081 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121002s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.0.3:46909 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115002s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.0.3:36030 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000220205s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.0.3:56187 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059401s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.0.3:51500 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099802s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.1.2:57247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147903s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.1.2:46132 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000170203s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.1.2:57206 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000452309s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.1.2:44795 - 5 "PTR IN 1.96.23.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000146203s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.0.3:33385 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000082102s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.0.3:56742 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000173704s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.0.3:46927 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185904s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.0.3:42956 - 5 "PTR IN 1.96.23.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000054801s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0514 00:18:10.428888    4316 logs.go:123] Gathering logs for kube-scheduler [964887fc5d36] ...
	I0514 00:18:10.428888    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964887fc5d36"
	I0514 00:18:10.452924    4316 command_runner.go:130] ! I0513 23:56:04.693680       1 serving.go:380] Generated self-signed cert in-memory
	I0514 00:18:10.453023    4316 command_runner.go:130] ! W0513 23:56:06.133341       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0514 00:18:10.453023    4316 command_runner.go:130] ! W0513 23:56:06.133396       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:10.453069    4316 command_runner.go:130] ! W0513 23:56:06.133407       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0514 00:18:10.453093    4316 command_runner.go:130] ! W0513 23:56:06.133415       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0514 00:18:10.453093    4316 command_runner.go:130] ! I0513 23:56:06.170291       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0514 00:18:10.453093    4316 command_runner.go:130] ! I0513 23:56:06.170533       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:10.453093    4316 command_runner.go:130] ! I0513 23:56:06.174536       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0514 00:18:10.453093    4316 command_runner.go:130] ! I0513 23:56:06.174684       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0514 00:18:10.453093    4316 command_runner.go:130] ! I0513 23:56:06.174703       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:10.453093    4316 command_runner.go:130] ! I0513 23:56:06.174918       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:18:10.453093    4316 command_runner.go:130] ! W0513 23:56:06.182722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! E0513 23:56:06.186053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! W0513 23:56:06.183583       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! W0513 23:56:06.183698       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! W0513 23:56:06.183781       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! W0513 23:56:06.183835       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! W0513 23:56:06.183868       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! W0513 23:56:06.184039       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:10.453093    4316 command_runner.go:130] ! W0513 23:56:06.186929       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! W0513 23:56:06.186969       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! W0513 23:56:06.187026       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! E0513 23:56:06.188647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! E0513 23:56:06.188112       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! E0513 23:56:06.188121       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! E0513 23:56:06.188233       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! E0513 23:56:06.188242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! E0513 23:56:06.189252       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! E0513 23:56:06.189533       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:10.453093    4316 command_runner.go:130] ! E0513 23:56:06.189643       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.453621    4316 command_runner.go:130] ! E0513 23:56:06.189773       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.453663    4316 command_runner.go:130] ! W0513 23:56:06.190106       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0514 00:18:10.453663    4316 command_runner.go:130] ! E0513 23:56:06.190324       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0514 00:18:10.453698    4316 command_runner.go:130] ! W0513 23:56:06.190538       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0514 00:18:10.453733    4316 command_runner.go:130] ! E0513 23:56:06.191036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0514 00:18:10.453761    4316 command_runner.go:130] ! W0513 23:56:06.191581       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0514 00:18:10.453761    4316 command_runner.go:130] ! E0513 23:56:06.192160       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0514 00:18:10.453761    4316 command_runner.go:130] ! W0513 23:56:06.191626       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.453843    4316 command_runner.go:130] ! E0513 23:56:06.192721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.453843    4316 command_runner.go:130] ! W0513 23:56:06.190821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0514 00:18:10.453890    4316 command_runner.go:130] ! E0513 23:56:06.193134       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0514 00:18:10.453890    4316 command_runner.go:130] ! W0513 23:56:07.154218       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0514 00:18:10.453930    4316 command_runner.go:130] ! E0513 23:56:07.155376       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0514 00:18:10.453965    4316 command_runner.go:130] ! W0513 23:56:07.229548       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0514 00:18:10.454003    4316 command_runner.go:130] ! E0513 23:56:07.229613       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! W0513 23:56:07.344429       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! E0513 23:56:07.344853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! W0513 23:56:07.410556       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! E0513 23:56:07.410716       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! W0513 23:56:07.423084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! E0513 23:56:07.423126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! W0513 23:56:07.467897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! E0513 23:56:07.467939       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! W0513 23:56:07.484903       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! E0513 23:56:07.485019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! W0513 23:56:07.545758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! E0513 23:56:07.546087       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! W0513 23:56:07.573884       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! E0513 23:56:07.573980       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! W0513 23:56:07.633780       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! E0513 23:56:07.633901       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! W0513 23:56:07.680821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! E0513 23:56:07.680938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! W0513 23:56:07.704130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! E0513 23:56:07.704357       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! W0513 23:56:07.736914       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:10.454027    4316 command_runner.go:130] ! E0513 23:56:07.737079       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:10.454027    4316 command_runner.go:130] ! W0513 23:56:07.754367       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0514 00:18:10.454555    4316 command_runner.go:130] ! E0513 23:56:07.754798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0514 00:18:10.454555    4316 command_runner.go:130] ! I0513 23:56:09.676327       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:18:10.454605    4316 command_runner.go:130] ! E0514 00:14:35.689344       1 run.go:74] "command failed" err="finished without leader elect"
	I0514 00:18:10.465686    4316 logs.go:123] Gathering logs for kube-proxy [b2a1b31cd7de] ...
	I0514 00:18:10.465686    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a1b31cd7de"
	I0514 00:18:10.489542    4316 command_runner.go:130] ! I0514 00:16:57.528613       1 server_linux.go:69] "Using iptables proxy"
	I0514 00:18:10.489749    4316 command_runner.go:130] ! I0514 00:16:57.562847       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.102.122"]
	I0514 00:18:10.489749    4316 command_runner.go:130] ! I0514 00:16:57.701301       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0514 00:18:10.489749    4316 command_runner.go:130] ! I0514 00:16:57.701447       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0514 00:18:10.489749    4316 command_runner.go:130] ! I0514 00:16:57.701476       1 server_linux.go:165] "Using iptables Proxier"
	I0514 00:18:10.489833    4316 command_runner.go:130] ! I0514 00:16:57.708219       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0514 00:18:10.489833    4316 command_runner.go:130] ! I0514 00:16:57.708800       1 server.go:872] "Version info" version="v1.30.0"
	I0514 00:18:10.489833    4316 command_runner.go:130] ! I0514 00:16:57.708841       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:10.489833    4316 command_runner.go:130] ! I0514 00:16:57.712422       1 config.go:192] "Starting service config controller"
	I0514 00:18:10.489833    4316 command_runner.go:130] ! I0514 00:16:57.712733       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0514 00:18:10.489833    4316 command_runner.go:130] ! I0514 00:16:57.712780       1 config.go:101] "Starting endpoint slice config controller"
	I0514 00:18:10.489833    4316 command_runner.go:130] ! I0514 00:16:57.712824       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0514 00:18:10.489833    4316 command_runner.go:130] ! I0514 00:16:57.715339       1 config.go:319] "Starting node config controller"
	I0514 00:18:10.489833    4316 command_runner.go:130] ! I0514 00:16:57.717651       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0514 00:18:10.489833    4316 command_runner.go:130] ! I0514 00:16:57.815732       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0514 00:18:10.489833    4316 command_runner.go:130] ! I0514 00:16:57.815811       1 shared_informer.go:320] Caches are synced for service config
	I0514 00:18:10.489833    4316 command_runner.go:130] ! I0514 00:16:57.818050       1 shared_informer.go:320] Caches are synced for node config
	I0514 00:18:10.491666    4316 logs.go:123] Gathering logs for kube-proxy [91edaaa00da2] ...
	I0514 00:18:10.491754    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91edaaa00da2"
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.901713       1 server_linux.go:69] "Using iptables proxy"
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.929714       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.106.39"]
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.982680       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.982795       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.982816       1 server_linux.go:165] "Using iptables Proxier"
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.988669       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.989566       1 server.go:872] "Version info" version="v1.30.0"
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.989671       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.992700       1 config.go:192] "Starting service config controller"
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.993131       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.993327       1 config.go:101] "Starting endpoint slice config controller"
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.993339       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.994714       1 config.go:319] "Starting node config controller"
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.994744       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:25.094420       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:25.094530       1 shared_informer.go:320] Caches are synced for service config
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:25.094981       1 shared_informer.go:320] Caches are synced for node config
	I0514 00:18:10.518267    4316 logs.go:123] Gathering logs for kube-controller-manager [e96f94398d6d] ...
	I0514 00:18:10.518267    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e96f94398d6d"
	I0514 00:18:10.548103    4316 command_runner.go:130] ! I0513 23:56:04.448604       1 serving.go:380] Generated self-signed cert in-memory
	I0514 00:18:10.549011    4316 command_runner.go:130] ! I0513 23:56:04.932336       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0514 00:18:10.549011    4316 command_runner.go:130] ! I0513 23:56:04.932378       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:10.549093    4316 command_runner.go:130] ! I0513 23:56:04.934044       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0514 00:18:10.549093    4316 command_runner.go:130] ! I0513 23:56:04.934133       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:10.549093    4316 command_runner.go:130] ! I0513 23:56:04.934796       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0514 00:18:10.549093    4316 command_runner.go:130] ! I0513 23:56:04.935005       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:10.549093    4316 command_runner.go:130] ! I0513 23:56:09.124957       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0514 00:18:10.549093    4316 command_runner.go:130] ! I0513 23:56:09.125092       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0514 00:18:10.549093    4316 command_runner.go:130] ! I0513 23:56:09.140996       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0514 00:18:10.549093    4316 command_runner.go:130] ! I0513 23:56:09.141447       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0514 00:18:10.549093    4316 command_runner.go:130] ! I0513 23:56:09.141567       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0514 00:18:10.549248    4316 command_runner.go:130] ! I0513 23:56:09.156847       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0514 00:18:10.549248    4316 command_runner.go:130] ! I0513 23:56:09.157241       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0514 00:18:10.549248    4316 command_runner.go:130] ! I0513 23:56:09.157455       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0514 00:18:10.549248    4316 command_runner.go:130] ! I0513 23:56:09.170795       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0514 00:18:10.549335    4316 command_runner.go:130] ! I0513 23:56:09.171005       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0514 00:18:10.549335    4316 command_runner.go:130] ! I0513 23:56:09.171684       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0514 00:18:10.549335    4316 command_runner.go:130] ! I0513 23:56:09.171921       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0514 00:18:10.549335    4316 command_runner.go:130] ! I0513 23:56:09.172144       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0514 00:18:10.549335    4316 command_runner.go:130] ! I0513 23:56:09.183975       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0514 00:18:10.549335    4316 command_runner.go:130] ! I0513 23:56:09.184362       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0514 00:18:10.549466    4316 command_runner.go:130] ! I0513 23:56:09.185233       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0514 00:18:10.549466    4316 command_runner.go:130] ! I0513 23:56:09.230173       1 shared_informer.go:320] Caches are synced for tokens
	I0514 00:18:10.549466    4316 command_runner.go:130] ! I0513 23:56:09.242679       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0514 00:18:10.549466    4316 command_runner.go:130] ! I0513 23:56:09.242735       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0514 00:18:10.549574    4316 command_runner.go:130] ! I0513 23:56:09.242821       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0514 00:18:10.549574    4316 command_runner.go:130] ! I0513 23:56:09.249513       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0514 00:18:10.549574    4316 command_runner.go:130] ! I0513 23:56:09.249614       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0514 00:18:10.549660    4316 command_runner.go:130] ! I0513 23:56:09.249731       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0514 00:18:10.549660    4316 command_runner.go:130] ! I0513 23:56:09.249824       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0514 00:18:10.549743    4316 command_runner.go:130] ! I0513 23:56:09.249912       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0514 00:18:10.549743    4316 command_runner.go:130] ! I0513 23:56:09.250132       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0514 00:18:10.549743    4316 command_runner.go:130] ! I0513 23:56:09.250216       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0514 00:18:10.549832    4316 command_runner.go:130] ! I0513 23:56:09.250270       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0514 00:18:10.549832    4316 command_runner.go:130] ! I0513 23:56:09.250425       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0514 00:18:10.549832    4316 command_runner.go:130] ! I0513 23:56:09.250604       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0514 00:18:10.549918    4316 command_runner.go:130] ! I0513 23:56:09.250656       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0514 00:18:10.549918    4316 command_runner.go:130] ! I0513 23:56:09.250695       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0514 00:18:10.549918    4316 command_runner.go:130] ! I0513 23:56:09.250745       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0514 00:18:10.550010    4316 command_runner.go:130] ! I0513 23:56:09.250794       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0514 00:18:10.550010    4316 command_runner.go:130] ! I0513 23:56:09.250851       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0514 00:18:10.550010    4316 command_runner.go:130] ! I0513 23:56:09.250883       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0514 00:18:10.550010    4316 command_runner.go:130] ! I0513 23:56:09.250994       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0514 00:18:10.550110    4316 command_runner.go:130] ! I0513 23:56:09.251028       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0514 00:18:10.550133    4316 command_runner.go:130] ! I0513 23:56:09.251909       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0514 00:18:10.550133    4316 command_runner.go:130] ! I0513 23:56:09.251999       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0514 00:18:10.550133    4316 command_runner.go:130] ! I0513 23:56:09.252142       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0514 00:18:10.550133    4316 command_runner.go:130] ! I0513 23:56:09.305089       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0514 00:18:10.550218    4316 command_runner.go:130] ! I0513 23:56:09.305302       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0514 00:18:10.550218    4316 command_runner.go:130] ! I0513 23:56:09.305357       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0514 00:18:10.550218    4316 command_runner.go:130] ! I0513 23:56:09.305376       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0514 00:18:10.550301    4316 command_runner.go:130] ! I0513 23:56:09.321907       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0514 00:18:10.550301    4316 command_runner.go:130] ! I0513 23:56:09.322244       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0514 00:18:10.550301    4316 command_runner.go:130] ! I0513 23:56:09.322270       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0514 00:18:10.550301    4316 command_runner.go:130] ! I0513 23:56:09.324160       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0514 00:18:10.550301    4316 command_runner.go:130] ! I0513 23:56:09.324208       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0514 00:18:10.550392    4316 command_runner.go:130] ! E0513 23:56:09.334850       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0514 00:18:10.550392    4316 command_runner.go:130] ! I0513 23:56:09.335135       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0514 00:18:10.550478    4316 command_runner.go:130] ! I0513 23:56:09.346530       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0514 00:18:10.550478    4316 command_runner.go:130] ! I0513 23:56:09.346809       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0514 00:18:10.550478    4316 command_runner.go:130] ! I0513 23:56:09.346883       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0514 00:18:10.550478    4316 command_runner.go:130] ! I0513 23:56:09.385297       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0514 00:18:10.550564    4316 command_runner.go:130] ! I0513 23:56:09.385391       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0514 00:18:10.550564    4316 command_runner.go:130] ! I0513 23:56:09.385403       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0514 00:18:10.550564    4316 command_runner.go:130] ! I0513 23:56:09.542113       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0514 00:18:10.550564    4316 command_runner.go:130] ! I0513 23:56:09.542271       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0514 00:18:10.550654    4316 command_runner.go:130] ! I0513 23:56:09.542284       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0514 00:18:10.550654    4316 command_runner.go:130] ! I0513 23:56:09.581300       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0514 00:18:10.550654    4316 command_runner.go:130] ! I0513 23:56:09.581321       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0514 00:18:10.550742    4316 command_runner.go:130] ! I0513 23:56:09.581454       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:10.550742    4316 command_runner.go:130] ! I0513 23:56:09.581971       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0514 00:18:10.550742    4316 command_runner.go:130] ! I0513 23:56:09.582008       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0514 00:18:10.550742    4316 command_runner.go:130] ! I0513 23:56:09.582030       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:10.550833    4316 command_runner.go:130] ! I0513 23:56:09.582896       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0514 00:18:10.550833    4316 command_runner.go:130] ! I0513 23:56:09.582908       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0514 00:18:10.550833    4316 command_runner.go:130] ! I0513 23:56:09.582922       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:10.550833    4316 command_runner.go:130] ! I0513 23:56:09.583436       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0514 00:18:10.550926    4316 command_runner.go:130] ! I0513 23:56:09.583678       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0514 00:18:10.550926    4316 command_runner.go:130] ! I0513 23:56:09.583691       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0514 00:18:10.550926    4316 command_runner.go:130] ! I0513 23:56:09.583727       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:10.551014    4316 command_runner.go:130] ! I0513 23:56:09.734073       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0514 00:18:10.551014    4316 command_runner.go:130] ! I0513 23:56:09.734159       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0514 00:18:10.551014    4316 command_runner.go:130] ! I0513 23:56:09.734446       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0514 00:18:10.551014    4316 command_runner.go:130] ! I0513 23:56:09.885354       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0514 00:18:10.551014    4316 command_runner.go:130] ! I0513 23:56:09.885756       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0514 00:18:10.551014    4316 command_runner.go:130] ! I0513 23:56:09.885934       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0514 00:18:10.551134    4316 command_runner.go:130] ! I0513 23:56:10.040288       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0514 00:18:10.551134    4316 command_runner.go:130] ! I0513 23:56:10.040486       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0514 00:18:10.551134    4316 command_runner.go:130] ! I0513 23:56:20.090311       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0514 00:18:10.551224    4316 command_runner.go:130] ! I0513 23:56:20.090418       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0514 00:18:10.551224    4316 command_runner.go:130] ! I0513 23:56:20.090428       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0514 00:18:10.551224    4316 command_runner.go:130] ! I0513 23:56:20.090911       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0514 00:18:10.551224    4316 command_runner.go:130] ! I0513 23:56:20.091093       1 shared_informer.go:313] Waiting for caches to sync for node
	I0514 00:18:10.551224    4316 command_runner.go:130] ! I0513 23:56:20.101598       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0514 00:18:10.551294    4316 command_runner.go:130] ! I0513 23:56:20.101778       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0514 00:18:10.551294    4316 command_runner.go:130] ! I0513 23:56:20.101805       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0514 00:18:10.551294    4316 command_runner.go:130] ! I0513 23:56:20.114509       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0514 00:18:10.551294    4316 command_runner.go:130] ! I0513 23:56:20.114580       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0514 00:18:10.551365    4316 command_runner.go:130] ! I0513 23:56:20.114849       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0514 00:18:10.551365    4316 command_runner.go:130] ! I0513 23:56:20.114678       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0514 00:18:10.551365    4316 command_runner.go:130] ! I0513 23:56:20.115038       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0514 00:18:10.551436    4316 command_runner.go:130] ! I0513 23:56:20.115048       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0514 00:18:10.551436    4316 command_runner.go:130] ! E0513 23:56:20.117646       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0514 00:18:10.551436    4316 command_runner.go:130] ! I0513 23:56:20.117865       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0514 00:18:10.551436    4316 command_runner.go:130] ! I0513 23:56:20.130498       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0514 00:18:10.551506    4316 command_runner.go:130] ! I0513 23:56:20.130711       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0514 00:18:10.551506    4316 command_runner.go:130] ! I0513 23:56:20.130932       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0514 00:18:10.551506    4316 command_runner.go:130] ! I0513 23:56:20.143035       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0514 00:18:10.551506    4316 command_runner.go:130] ! I0513 23:56:20.143414       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0514 00:18:10.551582    4316 command_runner.go:130] ! I0513 23:56:20.143607       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0514 00:18:10.551582    4316 command_runner.go:130] ! I0513 23:56:20.160023       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0514 00:18:10.551582    4316 command_runner.go:130] ! I0513 23:56:20.160191       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0514 00:18:10.551582    4316 command_runner.go:130] ! I0513 23:56:20.160215       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0514 00:18:10.551582    4316 command_runner.go:130] ! I0513 23:56:20.170613       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0514 00:18:10.551659    4316 command_runner.go:130] ! I0513 23:56:20.170951       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0514 00:18:10.551659    4316 command_runner.go:130] ! I0513 23:56:20.171064       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0514 00:18:10.551659    4316 command_runner.go:130] ! I0513 23:56:20.179840       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0514 00:18:10.551659    4316 command_runner.go:130] ! I0513 23:56:20.180447       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0514 00:18:10.551746    4316 command_runner.go:130] ! I0513 23:56:20.180590       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0514 00:18:10.551746    4316 command_runner.go:130] ! I0513 23:56:20.190977       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0514 00:18:10.551746    4316 command_runner.go:130] ! I0513 23:56:20.191286       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0514 00:18:10.551746    4316 command_runner.go:130] ! I0513 23:56:20.191448       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0514 00:18:10.551830    4316 command_runner.go:130] ! I0513 23:56:20.204888       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0514 00:18:10.551830    4316 command_runner.go:130] ! I0513 23:56:20.205578       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0514 00:18:10.551830    4316 command_runner.go:130] ! I0513 23:56:20.205670       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0514 00:18:10.551830    4316 command_runner.go:130] ! I0513 23:56:20.239034       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0514 00:18:10.551830    4316 command_runner.go:130] ! I0513 23:56:20.239193       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0514 00:18:10.551909    4316 command_runner.go:130] ! I0513 23:56:20.239262       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0514 00:18:10.551909    4316 command_runner.go:130] ! I0513 23:56:20.482568       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0514 00:18:10.551909    4316 command_runner.go:130] ! I0513 23:56:20.486046       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0514 00:18:10.551909    4316 command_runner.go:130] ! I0513 23:56:20.486073       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0514 00:18:10.551909    4316 command_runner.go:130] ! I0513 23:56:20.486093       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:20.786163       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:20.786358       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.082938       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.083657       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.083743       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.238006       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.238099       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.238152       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.238163       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.283674       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.283751       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.283986       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.284217       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.442664       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.442840       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.442854       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.587997       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.588249       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.588322       1 shared_informer.go:313] Waiting for caches to sync for job
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.740205       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.740392       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.740547       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.889738       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.890053       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.890145       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.038114       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.038197       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.038216       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.038314       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.038329       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.291303       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.291332       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.291999       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.299124       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.317101       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.321553       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100\" does not exist"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.322540       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.335837       1 shared_informer.go:320] Caches are synced for cronjob
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.339493       1 shared_informer.go:320] Caches are synced for PV protection
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.339494       1 shared_informer.go:320] Caches are synced for GC
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.339605       1 shared_informer.go:320] Caches are synced for crt configmap
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.340940       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.341044       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.342309       1 shared_informer.go:320] Caches are synced for service account
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.343675       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.343828       1 shared_informer.go:320] Caches are synced for PVC protection
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.347539       1 shared_informer.go:320] Caches are synced for expand
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.357773       1 shared_informer.go:320] Caches are synced for deployment
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.361377       1 shared_informer.go:320] Caches are synced for ephemeral
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.372019       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.380620       1 shared_informer.go:320] Caches are synced for stateful set
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.382092       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.382250       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.382979       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.384565       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.384604       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.384724       1 shared_informer.go:320] Caches are synced for HPA
	I0514 00:18:10.553146    4316 command_runner.go:130] ! I0513 23:56:22.386009       1 shared_informer.go:320] Caches are synced for TTL
	I0514 00:18:10.553146    4316 command_runner.go:130] ! I0513 23:56:22.386117       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0514 00:18:10.553146    4316 command_runner.go:130] ! I0513 23:56:22.386299       1 shared_informer.go:320] Caches are synced for attach detach
	I0514 00:18:10.553146    4316 command_runner.go:130] ! I0513 23:56:22.389103       1 shared_informer.go:320] Caches are synced for job
	I0514 00:18:10.553146    4316 command_runner.go:130] ! I0513 23:56:22.390596       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0514 00:18:10.553146    4316 command_runner.go:130] ! I0513 23:56:22.391278       1 shared_informer.go:320] Caches are synced for node
	I0514 00:18:10.553146    4316 command_runner.go:130] ! I0513 23:56:22.391538       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0514 00:18:10.553146    4316 command_runner.go:130] ! I0513 23:56:22.391663       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0514 00:18:10.553146    4316 command_runner.go:130] ! I0513 23:56:22.392031       1 shared_informer.go:320] Caches are synced for namespace
	I0514 00:18:10.553258    4316 command_runner.go:130] ! I0513 23:56:22.392207       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0514 00:18:10.553258    4316 command_runner.go:130] ! I0513 23:56:22.392242       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0514 00:18:10.553258    4316 command_runner.go:130] ! I0513 23:56:22.392249       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0514 00:18:10.553258    4316 command_runner.go:130] ! I0513 23:56:22.402105       1 shared_informer.go:320] Caches are synced for daemon sets
	I0514 00:18:10.553335    4316 command_runner.go:130] ! I0513 23:56:22.405500       1 shared_informer.go:320] Caches are synced for disruption
	I0514 00:18:10.553335    4316 command_runner.go:130] ! I0513 23:56:22.406927       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0514 00:18:10.553335    4316 command_runner.go:130] ! I0513 23:56:22.411111       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100" podCIDRs=["10.244.0.0/24"]
	I0514 00:18:10.553335    4316 command_runner.go:130] ! I0513 23:56:22.431075       1 shared_informer.go:320] Caches are synced for persistent volume
	I0514 00:18:10.553405    4316 command_runner.go:130] ! I0513 23:56:22.443663       1 shared_informer.go:320] Caches are synced for endpoint
	I0514 00:18:10.553405    4316 command_runner.go:130] ! I0513 23:56:22.552382       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 00:18:10.553405    4316 command_runner.go:130] ! I0513 23:56:22.573274       1 shared_informer.go:320] Caches are synced for taint
	I0514 00:18:10.553405    4316 command_runner.go:130] ! I0513 23:56:22.573442       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0514 00:18:10.553471    4316 command_runner.go:130] ! I0513 23:56:22.573935       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100"
	I0514 00:18:10.553471    4316 command_runner.go:130] ! I0513 23:56:22.574179       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0514 00:18:10.553471    4316 command_runner.go:130] ! I0513 23:56:22.586849       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0514 00:18:10.553471    4316 command_runner.go:130] ! I0513 23:56:22.602574       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 00:18:10.553543    4316 command_runner.go:130] ! I0513 23:56:23.018846       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 00:18:10.553543    4316 command_runner.go:130] ! I0513 23:56:23.087540       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 00:18:10.553625    4316 command_runner.go:130] ! I0513 23:56:23.087631       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0514 00:18:10.553625    4316 command_runner.go:130] ! I0513 23:56:23.691681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="593.37356ms"
	I0514 00:18:10.553625    4316 command_runner.go:130] ! I0513 23:56:23.736584       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.765409ms"
	I0514 00:18:10.553625    4316 command_runner.go:130] ! I0513 23:56:23.736691       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.105µs"
	I0514 00:18:10.553716    4316 command_runner.go:130] ! I0513 23:56:23.741069       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="82.307µs"
	I0514 00:18:10.553716    4316 command_runner.go:130] ! I0513 23:56:24.558346       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.410112ms"
	I0514 00:18:10.553716    4316 command_runner.go:130] ! I0513 23:56:24.599621       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.388659ms"
	I0514 00:18:10.553793    4316 command_runner.go:130] ! I0513 23:56:24.599778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.705µs"
	I0514 00:18:10.553793    4316 command_runner.go:130] ! I0513 23:56:35.460855       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.604µs"
	I0514 00:18:10.553793    4316 command_runner.go:130] ! I0513 23:56:35.495875       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="63.404µs"
	I0514 00:18:10.553793    4316 command_runner.go:130] ! I0513 23:56:36.868700       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="85.505µs"
	I0514 00:18:10.553865    4316 command_runner.go:130] ! I0513 23:56:36.916603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.935352ms"
	I0514 00:18:10.553865    4316 command_runner.go:130] ! I0513 23:56:36.917123       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.803µs"
	I0514 00:18:10.553865    4316 command_runner.go:130] ! I0513 23:56:37.577172       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0514 00:18:10.553932    4316 command_runner.go:130] ! I0513 23:59:02.230067       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m02\" does not exist"
	I0514 00:18:10.553932    4316 command_runner.go:130] ! I0513 23:59:02.246355       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m02" podCIDRs=["10.244.1.0/24"]
	I0514 00:18:10.553932    4316 command_runner.go:130] ! I0513 23:59:02.603699       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m02"
	I0514 00:18:10.554002    4316 command_runner.go:130] ! I0513 23:59:22.527169       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:10.554002    4316 command_runner.go:130] ! I0513 23:59:45.791856       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.887671ms"
	I0514 00:18:10.554002    4316 command_runner.go:130] ! I0513 23:59:45.808219       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.096894ms"
	I0514 00:18:10.554071    4316 command_runner.go:130] ! I0513 23:59:45.808747       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.005µs"
	I0514 00:18:10.554071    4316 command_runner.go:130] ! I0513 23:59:45.809833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.705µs"
	I0514 00:18:10.554071    4316 command_runner.go:130] ! I0513 23:59:45.811263       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.604µs"
	I0514 00:18:10.554071    4316 command_runner.go:130] ! I0513 23:59:48.526617       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.926472ms"
	I0514 00:18:10.554071    4316 command_runner.go:130] ! I0513 23:59:48.529326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.302µs"
	I0514 00:18:10.554175    4316 command_runner.go:130] ! I0513 23:59:48.555529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.972453ms"
	I0514 00:18:10.554195    4316 command_runner.go:130] ! I0513 23:59:48.556317       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.601µs"
	I0514 00:18:10.554195    4316 command_runner.go:130] ! I0514 00:03:17.563212       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:10.554195    4316 command_runner.go:130] ! I0514 00:03:17.565297       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m03\" does not exist"
	I0514 00:18:10.554266    4316 command_runner.go:130] ! I0514 00:03:17.579900       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m03" podCIDRs=["10.244.2.0/24"]
	I0514 00:18:10.554266    4316 command_runner.go:130] ! I0514 00:03:17.665892       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m03"
	I0514 00:18:10.554266    4316 command_runner.go:130] ! I0514 00:03:38.035898       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:10.554350    4316 command_runner.go:130] ! I0514 00:10:17.797191       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:10.554350    4316 command_runner.go:130] ! I0514 00:12:39.070271       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:10.554350    4316 command_runner.go:130] ! I0514 00:12:44.527915       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:10.554434    4316 command_runner.go:130] ! I0514 00:12:44.528275       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m03\" does not exist"
	I0514 00:18:10.554434    4316 command_runner.go:130] ! I0514 00:12:44.543895       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m03" podCIDRs=["10.244.3.0/24"]
	I0514 00:18:10.554434    4316 command_runner.go:130] ! I0514 00:12:49.983419       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:10.554513    4316 command_runner.go:130] ! I0514 00:14:17.920991       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:10.554538    4316 command_runner.go:130] ! I0514 00:14:33.013074       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.740609ms"
	I0514 00:18:10.554569    4316 command_runner.go:130] ! I0514 00:14:33.013918       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.506µs"
	I0514 00:18:10.569999    4316 logs.go:123] Gathering logs for kindnet [2b424a7cd98c] ...
	I0514 00:18:10.569999    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b424a7cd98c"
	I0514 00:18:10.593766    4316 command_runner.go:130] ! I0514 00:17:28.349800       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0514 00:18:10.593766    4316 command_runner.go:130] ! I0514 00:17:28.349935       1 main.go:107] hostIP = 172.23.102.122
	I0514 00:18:10.593766    4316 command_runner.go:130] ! podIP = 172.23.102.122
	I0514 00:18:10.593766    4316 command_runner.go:130] ! I0514 00:17:28.441282       1 main.go:116] setting mtu 1500 for CNI 
	I0514 00:18:10.593766    4316 command_runner.go:130] ! I0514 00:17:28.441413       1 main.go:146] kindnetd IP family: "ipv4"
	I0514 00:18:10.593766    4316 command_runner.go:130] ! I0514 00:17:28.441441       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0514 00:18:10.593766    4316 command_runner.go:130] ! I0514 00:17:29.045047       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:10.593766    4316 command_runner.go:130] ! I0514 00:17:29.045110       1 main.go:227] handling current node
	I0514 00:18:10.593766    4316 command_runner.go:130] ! I0514 00:17:29.045545       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:10.593766    4316 command_runner.go:130] ! I0514 00:17:29.045580       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:10.594304    4316 command_runner.go:130] ! I0514 00:17:29.045839       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.23.109.58 Flags: [] Table: 0} 
	I0514 00:18:10.594304    4316 command_runner.go:130] ! I0514 00:17:29.045983       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:29.045993       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:29.046039       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.23.102.231 Flags: [] Table: 0} 
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:39.055904       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:39.056127       1 main.go:227] handling current node
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:39.056141       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:39.056155       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:39.056412       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:39.056502       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:49.062369       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:49.062453       1 main.go:227] handling current node
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:49.062465       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:49.062483       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:49.062816       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:49.062843       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:59.075229       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:59.075506       1 main.go:227] handling current node
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:59.075588       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:59.075650       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:59.075827       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:59.075835       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:18:09.090534       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:18:09.090748       1 main.go:227] handling current node
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:18:09.090769       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:18:09.090777       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:18:09.091233       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:18:09.091328       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:10.598592    4316 logs.go:123] Gathering logs for Docker ...
	I0514 00:18:10.598694    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0514 00:18:10.620939    4316 command_runner.go:130] > May 14 00:15:30 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0514 00:18:10.620939    4316 command_runner.go:130] > May 14 00:15:30 minikube cri-dockerd[223]: time="2024-05-14T00:15:30Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0514 00:18:10.620939    4316 command_runner.go:130] > May 14 00:15:30 minikube cri-dockerd[223]: time="2024-05-14T00:15:30Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0514 00:18:10.620939    4316 command_runner.go:130] > May 14 00:15:30 minikube cri-dockerd[223]: time="2024-05-14T00:15:30Z" level=info msg="Start docker client with request timeout 0s"
	I0514 00:18:10.620939    4316 command_runner.go:130] > May 14 00:15:30 minikube cri-dockerd[223]: time="2024-05-14T00:15:30Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0514 00:18:10.621776    4316 command_runner.go:130] > May 14 00:15:31 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:10.621776    4316 command_runner.go:130] > May 14 00:15:31 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0514 00:18:10.621776    4316 command_runner.go:130] > May 14 00:15:31 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0514 00:18:10.621776    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0514 00:18:10.621776    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0514 00:18:10.621776    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0514 00:18:10.621882    4316 command_runner.go:130] > May 14 00:15:33 minikube cri-dockerd[418]: time="2024-05-14T00:15:33Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0514 00:18:10.621882    4316 command_runner.go:130] > May 14 00:15:33 minikube cri-dockerd[418]: time="2024-05-14T00:15:33Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0514 00:18:10.621882    4316 command_runner.go:130] > May 14 00:15:33 minikube cri-dockerd[418]: time="2024-05-14T00:15:33Z" level=info msg="Start docker client with request timeout 0s"
	I0514 00:18:10.621962    4316 command_runner.go:130] > May 14 00:15:33 minikube cri-dockerd[418]: time="2024-05-14T00:15:33Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0514 00:18:10.622183    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:10.622183    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0514 00:18:10.622183    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0514 00:18:10.622259    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0514 00:18:10.622259    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0514 00:18:10.622259    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0514 00:18:10.622259    4316 command_runner.go:130] > May 14 00:15:36 minikube cri-dockerd[426]: time="2024-05-14T00:15:36Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0514 00:18:10.622259    4316 command_runner.go:130] > May 14 00:15:36 minikube cri-dockerd[426]: time="2024-05-14T00:15:36Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0514 00:18:10.622335    4316 command_runner.go:130] > May 14 00:15:36 minikube cri-dockerd[426]: time="2024-05-14T00:15:36Z" level=info msg="Start docker client with request timeout 0s"
	I0514 00:18:10.622386    4316 command_runner.go:130] > May 14 00:15:36 minikube cri-dockerd[426]: time="2024-05-14T00:15:36Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0514 00:18:10.622386    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:10.622420    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0514 00:18:10.622449    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0514 00:18:10.622449    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0514 00:18:10.622449    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0514 00:18:10.622510    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0514 00:18:10.622510    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0514 00:18:10.622510    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0514 00:18:10.622572    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 systemd[1]: Starting Docker Application Container Engine...
	I0514 00:18:10.622572    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[654]: time="2024-05-14T00:16:17.349024460Z" level=info msg="Starting up"
	I0514 00:18:10.622572    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[654]: time="2024-05-14T00:16:17.349886331Z" level=info msg="containerd not running, starting managed containerd"
	I0514 00:18:10.622623    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[654]: time="2024-05-14T00:16:17.351031392Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=660
	I0514 00:18:10.622657    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.380428255Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0514 00:18:10.622657    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.407060046Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0514 00:18:10.622703    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.407104860Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0514 00:18:10.622703    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.407157277Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0514 00:18:10.622734    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.407182685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.622781    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408093872Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:10.622781    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408200005Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.622812    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408421875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:10.622859    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408522107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.622859    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408552116Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0514 00:18:10.622914    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408565820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.622986    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.409126597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.623018    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.409855027Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.623018    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.412841968Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:10.623018    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.412982412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.623547    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.413109352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:10.623588    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.413195779Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0514 00:18:10.623681    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.414192994Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0514 00:18:10.623728    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.414303628Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0514 00:18:10.623728    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.414321234Z" level=info msg="metadata content store policy set" policy=shared
	I0514 00:18:10.623768    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420644226Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0514 00:18:10.623856    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420793973Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0514 00:18:10.623902    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420815380Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0514 00:18:10.623942    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420835086Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0514 00:18:10.623942    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420849391Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0514 00:18:10.623991    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421006640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0514 00:18:10.624030    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421303834Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0514 00:18:10.624077    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421395163Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0514 00:18:10.624118    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421479890Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0514 00:18:10.624118    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421494994Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0514 00:18:10.624204    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421507198Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.624250    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421523703Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.624290    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421540509Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.624290    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421554613Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.624338    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421571518Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.624377    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421584022Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.624424    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421594526Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.627440    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421604629Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.628010    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421626336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628010    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421639040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628062    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421651344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628062    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421662947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421673350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421684554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421695257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421705961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421717564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421730268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421774782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421787286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421797990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421811094Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421828299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421838703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421849206Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421898721Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421926330Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421987549Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0514 00:18:10.628684    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422004755Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0514 00:18:10.628762    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422070276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628808    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422106987Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422118891Z" level=info msg="NRI interface is disabled by configuration."
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422453196Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422571233Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422619148Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422687970Z" level=info msg="containerd successfully booted in 0.044863s"
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:18 multinode-101100 dockerd[654]: time="2024-05-14T00:16:18.404653025Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:18 multinode-101100 dockerd[654]: time="2024-05-14T00:16:18.578701970Z" level=info msg="Loading containers: start."
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.027152626Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.105905244Z" level=info msg="Loading containers: done."
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.135340666Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.136139953Z" level=info msg="Daemon has completed initialization"
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.185948604Z" level=info msg="API listen on [::]:2376"
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.186071317Z" level=info msg="API listen on /var/run/docker.sock"
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 systemd[1]: Started Docker Application Container Engine.
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 systemd[1]: Stopping Docker Application Container Engine...
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.988898314Z" level=info msg="Processing signal 'terminated'"
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.989838579Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.990583130Z" level=info msg="Daemon shutdown complete"
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.990661536Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.990696238Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:42 multinode-101100 systemd[1]: docker.service: Deactivated successfully.
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:42 multinode-101100 systemd[1]: Stopped Docker Application Container Engine.
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 systemd[1]: Starting Docker Application Container Engine...
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:43.059729298Z" level=info msg="Starting up"
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:43.060541955Z" level=info msg="containerd not running, starting managed containerd"
	I0514 00:18:10.629377    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:43.061850245Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1055
	I0514 00:18:10.629417    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.092613476Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0514 00:18:10.629466    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115368453Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0514 00:18:10.629466    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115403155Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115435257Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115450359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115473760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115486261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115635771Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115738478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115756280Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115766280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115789882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.116031099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.119790059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.119888566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120181886Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120287794Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0514 00:18:10.630014    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120385900Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0514 00:18:10.630103    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120406702Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0514 00:18:10.630154    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120419603Z" level=info msg="metadata content store policy set" policy=shared
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120713023Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120746825Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120760126Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120773227Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120785328Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120826831Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120999543Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121054147Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121092049Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121102050Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121115951Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121126152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121135052Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121145153Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121156354Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121165854Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121175255Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121184656Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121204657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121216358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121225759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121235159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121243960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121254361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121263161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121275762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121287763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121299564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121364668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121378369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121388070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121400871Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121421772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121432873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.631124    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121442174Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0514 00:18:10.631172    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121474076Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0514 00:18:10.631172    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121485477Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0514 00:18:10.631252    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121493977Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0514 00:18:10.631299    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121504178Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0514 00:18:10.631332    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121548581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.631412    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121558382Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0514 00:18:10.631459    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121570783Z" level=info msg="NRI interface is disabled by configuration."
	I0514 00:18:10.631459    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121732894Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0514 00:18:10.631491    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121765696Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0514 00:18:10.631540    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121795498Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0514 00:18:10.631580    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121808099Z" level=info msg="containerd successfully booted in 0.031442s"
	I0514 00:18:10.631626    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.110784113Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0514 00:18:10.631626    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.142577516Z" level=info msg="Loading containers: start."
	I0514 00:18:10.631658    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.405628939Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0514 00:18:10.631658    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.480865351Z" level=info msg="Loading containers: done."
	I0514 00:18:10.631709    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.503621028Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0514 00:18:10.631741    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.503703734Z" level=info msg="Daemon has completed initialization"
	I0514 00:18:10.631741    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.545253312Z" level=info msg="API listen on /var/run/docker.sock"
	I0514 00:18:10.631782    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.545312016Z" level=info msg="API listen on [::]:2376"
	I0514 00:18:10.631782    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 systemd[1]: Started Docker Application Container Engine.
	I0514 00:18:10.631814    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0514 00:18:10.631814    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0514 00:18:10.631855    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0514 00:18:10.631887    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Start docker client with request timeout 0s"
	I0514 00:18:10.631887    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0514 00:18:10.631929    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Loaded network plugin cni"
	I0514 00:18:10.631929    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Start cri-dockerd grpc backend"
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-xqj6w_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"76d1b8ce19aba5b210540936b7a4b3d885cf4632a985872e3cf05d6cea2e0ca2\""
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-4kmx4_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"8bb49b28c842af421711ef939d018058baa07a32bbcdc98976511d4800986697\""
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.717439407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.717535614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.717551915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.718214261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.720663031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.720923549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.721017455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.721295774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.783128658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.783344773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.783450280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.783657895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.816093342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.816151946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.816166547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.816251853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ddcaadef980aca40a7740fe7c59949c3cb803d9fb441eca155b02162f3422bb8/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:10.633051    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/659643d47b9ae231a8b97d9871cab6dfac5f6d06e647c919d14170832ee47683/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:10.633090    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/419648c0d4053fc49953367496f1dbfe0fc7ce631e09569d18f5031a7c94053b/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:10.633104    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/509b8407e0955daa05e6418b83790728e61d0bd72fecdd814c8e92ae9e80d3a3/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:10.633104    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.258935521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.633169    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.259980593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.633208    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.260187008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633208    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.260361520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633270    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.272553064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.633270    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.272771779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.633312    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.272798781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633342    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.272907589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633342    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.314782590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.633382    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.314905098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.633412    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.314946601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633451    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.315263523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633480    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.385829312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.633480    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.386016625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.633557    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.386135333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633922    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.386495758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633922    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:55Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0514 00:18:10.633964    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.444453862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.444531867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.444549969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.444647976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.461909471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.462106685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.462142187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.462265196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.492511091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.492965923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.493135035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.493390352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a8ac60a565998ca52581e38272f2fcdb5f7038023f93d728cd74f5b89f5593ed/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/468a0e2976ae45a571a99afabfcd1329c76873e973179fe56cc9ef46e2533698/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.849392115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.849539826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.849623331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.849861048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.857219658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.857468675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.857687390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.634517    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.858016113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.634517    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5233e076edceb93931d756579982e556959dfd31508760da215a8407dca14e56/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:10.634547    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:57.218178264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.634616    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:57.218325574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.634616    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:57.218348976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.634661    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:57.218459383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.634691    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 dockerd[1049]: time="2024-05-14T00:17:17.430189771Z" level=info msg="ignoring event" container=b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0514 00:18:10.634942    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:17.431460316Z" level=info msg="shim disconnected" id=b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7 namespace=moby
	I0514 00:18:10.634988    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:17.431869631Z" level=warning msg="cleaning up after shim disconnected" id=b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7 namespace=moby
	I0514 00:18:10.635020    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:17.432007736Z" level=info msg="cleaning up dead shim" namespace=moby
	I0514 00:18:10.635061    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 dockerd[1049]: time="2024-05-14T00:17:27.281698284Z" level=info msg="ignoring event" container=b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0514 00:18:10.635093    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:27.282877145Z" level=info msg="shim disconnected" id=b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc namespace=moby
	I0514 00:18:10.635143    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:27.283000451Z" level=warning msg="cleaning up after shim disconnected" id=b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc namespace=moby
	I0514 00:18:10.635175    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:27.283015352Z" level=info msg="cleaning up dead shim" namespace=moby
	I0514 00:18:10.635258    4316 command_runner.go:130] > May 14 00:17:28 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:28.098999177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.635258    4316 command_runner.go:130] > May 14 00:17:28 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:28.099271791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.635258    4316 command_runner.go:130] > May 14 00:17:28 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:28.099326694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.635784    4316 command_runner.go:130] > May 14 00:17:28 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:28.099641511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.635824    4316 command_runner.go:130] > May 14 00:17:40 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:40.092603581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.635824    4316 command_runner.go:130] > May 14 00:17:40 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:40.093732951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.635824    4316 command_runner.go:130] > May 14 00:17:40 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:40.093768053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.635875    4316 command_runner.go:130] > May 14 00:17:40 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:40.095427255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.635915    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235051362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.635955    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235156269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.635955    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235169170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.635994    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235258576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.636036    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235645702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.636068    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235713507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.636110    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235730808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.636141    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235828014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.636141    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:18:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1cccb5e8cee3b173bd49a88aee4239ccc8bc11a3a166316e92f3a9abce9b252d/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:10.636214    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:18:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8cb9b6d6d0915742a78c054211d49332a04beb4875f8a8f80cc4131b2a11aa2d/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0514 00:18:10.636285    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.743900500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.636827    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.743970305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.636860    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.744406335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.636899    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.745139484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.636930    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.808545660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.636930    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.808756974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.636998    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.808962988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.636998    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.809189903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.637036    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637066    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637066    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637111    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637142    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637190    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637190    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637259    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637259    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637289    4316 command_runner.go:130] > May 14 00:18:04 multinode-101100 dockerd[1049]: 2024/05/14 00:18:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637338    4316 command_runner.go:130] > May 14 00:18:04 multinode-101100 dockerd[1049]: 2024/05/14 00:18:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637368    4316 command_runner.go:130] > May 14 00:18:04 multinode-101100 dockerd[1049]: 2024/05/14 00:18:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637737    4316 command_runner.go:130] > May 14 00:18:06 multinode-101100 dockerd[1049]: 2024/05/14 00:18:06 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637737    4316 command_runner.go:130] > May 14 00:18:06 multinode-101100 dockerd[1049]: 2024/05/14 00:18:06 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637779    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.668115    4316 logs.go:123] Gathering logs for kubelet ...
	I0514 00:18:10.668115    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 kubelet[1385]: I0514 00:16:46.507609    1385 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 kubelet[1385]: I0514 00:16:46.507660    1385 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 kubelet[1385]: I0514 00:16:46.508230    1385 server.go:927] "Client rotation is on, will bootstrap in background"
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 kubelet[1385]: E0514 00:16:46.508906    1385 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 kubelet[1441]: I0514 00:16:47.229791    1441 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 kubelet[1441]: I0514 00:16:47.229941    1441 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 kubelet[1441]: I0514 00:16:47.230764    1441 server.go:927] "Client rotation is on, will bootstrap in background"
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 kubelet[1441]: E0514 00:16:47.231303    1441 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.717000    1520 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.717452    1520 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.717850    1520 server.go:927] "Client rotation is on, will bootstrap in background"
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.719747    1520 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.734764    1520 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.754342    1520 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.754443    1520 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.755707    1520 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.755788    1520 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-101100","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0514 00:18:10.697636    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.756671    1520 topology_manager.go:138] "Creating topology manager with none policy"
	I0514 00:18:10.697674    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.756747    1520 container_manager_linux.go:301] "Creating device plugin manager"
	I0514 00:18:10.697674    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.757344    1520 state_mem.go:36] "Initialized new in-memory state store"
	I0514 00:18:10.697674    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.758885    1520 kubelet.go:400] "Attempting to sync node with API server"
	I0514 00:18:10.697721    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.759591    1520 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0514 00:18:10.697750    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.759727    1520 kubelet.go:312] "Adding apiserver pod source"
	I0514 00:18:10.697750    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.760630    1520 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0514 00:18:10.697776    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.765370    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-101100&limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.697831    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.765512    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-101100&limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.697857    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.767039    1520 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0514 00:18:10.697857    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.771297    1520 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0514 00:18:10.697895    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.771834    1520 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0514 00:18:10.697895    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.773545    1520 server.go:1264] "Started kubelet"
	I0514 00:18:10.697925    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.773829    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.697964    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.774013    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.697994    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.780360    1520 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.23.102.122:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-101100.17cf32c62bf0274b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-101100,UID:multinode-101100,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-101100,},FirstTimestamp:2024-05-14 00:16:49.773520715 +0000 UTC m=+0.124549330,LastTimestamp:2024-05-14 00:16:49.773520715 +0000 UTC m=+0.124549330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-101100,}"
	I0514 00:18:10.698042    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.781297    1520 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0514 00:18:10.698077    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.786484    1520 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0514 00:18:10.698109    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.787784    1520 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0514 00:18:10.698109    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.792005    1520 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0514 00:18:10.698146    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.800317    1520 server.go:455] "Adding debug handlers to kubelet server"
	I0514 00:18:10.698146    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.805202    1520 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0514 00:18:10.698179    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.805290    1520 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0514 00:18:10.698179    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.812186    1520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-101100?timeout=10s\": dial tcp 172.23.102.122:8443: connect: connection refused" interval="200ms"
	I0514 00:18:10.698216    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.812333    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.698279    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.812369    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.698279    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.816781    1520 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0514 00:18:10.698319    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.816881    1520 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0514 00:18:10.698319    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.816892    1520 factory.go:221] Registration of the systemd container factory successfully
	I0514 00:18:10.698366    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.849206    1520 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0514 00:18:10.698366    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.849426    1520 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0514 00:18:10.698366    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.849585    1520 state_mem.go:36] "Initialized new in-memory state store"
	I0514 00:18:10.698406    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.850764    1520 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0514 00:18:10.698406    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.850799    1520 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0514 00:18:10.698406    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.850826    1520 policy_none.go:49] "None policy: Start"
	I0514 00:18:10.698455    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.855604    1520 reconciler.go:26] "Reconciler: start to sync state"
	I0514 00:18:10.698455    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.884024    1520 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0514 00:18:10.698494    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.884165    1520 state_mem.go:35] "Initializing new in-memory state store"
	I0514 00:18:10.698494    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.886215    1520 state_mem.go:75] "Updated machine memory state"
	I0514 00:18:10.698494    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.888657    1520 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0514 00:18:10.698494    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.888839    1520 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0514 00:18:10.698547    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.891306    1520 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0514 00:18:10.698584    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.897961    1520 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0514 00:18:10.698584    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.898040    1520 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0514 00:18:10.698613    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.898088    1520 kubelet.go:2337] "Starting kubelet main sync loop"
	I0514 00:18:10.698613    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.898127    1520 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0514 00:18:10.698648    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.898551    1520 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0514 00:18:10.698681    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.899218    1520 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-101100\" not found"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.900215    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.900324    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.907443    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.909152    1520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.102.122:8443: connect: connection refused" node="multinode-101100"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.912132    1520 iptables.go:577] "Could not set up iptables canary" err=<
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.999139    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f7c140951f4f8270da243f55135e9f108f3cdf5ef11a4e990e06822ace5adbd"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.999762    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90d7537422a83c9a57ab3bed978e87441e2725a75ebc91f5cad3319d11d4ea18"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.999846    1520 topology_manager.go:215] "Topology Admit Handler" podUID="378d61cf78af695f1df41e321907a84d" podNamespace="kube-system" podName="kube-apiserver-multinode-101100"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.000880    1520 topology_manager.go:215] "Topology Admit Handler" podUID="5393de2704b2efef461d22fa52aa93c8" podNamespace="kube-system" podName="kube-controller-manager-multinode-101100"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.002201    1520 topology_manager.go:215] "Topology Admit Handler" podUID="8083abd658221f47cabf81a00c4ca98e" podNamespace="kube-system" podName="kube-scheduler-multinode-101100"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.004707    1520 topology_manager.go:215] "Topology Admit Handler" podUID="62d8afc7714e8ab65bff9675d120bb67" podNamespace="kube-system" podName="etcd-multinode-101100"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.007687    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcb3b27edcd2a44b67fad4a74f438a62eec78b20422f6f952396053574dfb97e"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.007796    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da9268fd6556bae4d0109c5065588160bcf737c35e1e5df738d31786425c22ff"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.007891    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bd694480978f356b61313108a6ff716a8d5f6e854fea1e4aa89a76a68d049f0"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.007938    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="287e744a4dc2e511f4e40696c7d3b4193896c0c40a5bb527e569d1d3ec2cb908"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.013966    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad0550a5dabf16106fc2956251a65bccdc32f3f3be1f27246f675964fd548a1f"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.014759    1520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-101100?timeout=10s\": dial tcp 172.23.102.122:8443: connect: connection refused" interval="400ms"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.031437    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76d1b8ce19aba5b210540936b7a4b3d885cf4632a985872e3cf05d6cea2e0ca2"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.048649    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bb49b28c842af421711ef939d018058baa07a32bbcdc98976511d4800986697"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074775    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/378d61cf78af695f1df41e321907a84d-ca-certs\") pod \"kube-apiserver-multinode-101100\" (UID: \"378d61cf78af695f1df41e321907a84d\") " pod="kube-system/kube-apiserver-multinode-101100"
	I0514 00:18:10.699268    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074859    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/378d61cf78af695f1df41e321907a84d-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-101100\" (UID: \"378d61cf78af695f1df41e321907a84d\") " pod="kube-system/kube-apiserver-multinode-101100"
	I0514 00:18:10.699268    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074906    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-k8s-certs\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:10.699348    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074943    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:10.699348    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074981    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/62d8afc7714e8ab65bff9675d120bb67-etcd-certs\") pod \"etcd-multinode-101100\" (UID: \"62d8afc7714e8ab65bff9675d120bb67\") " pod="kube-system/etcd-multinode-101100"
	I0514 00:18:10.699398    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075015    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/62d8afc7714e8ab65bff9675d120bb67-etcd-data\") pod \"etcd-multinode-101100\" (UID: \"62d8afc7714e8ab65bff9675d120bb67\") " pod="kube-system/etcd-multinode-101100"
	I0514 00:18:10.699427    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075045    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/378d61cf78af695f1df41e321907a84d-k8s-certs\") pod \"kube-apiserver-multinode-101100\" (UID: \"378d61cf78af695f1df41e321907a84d\") " pod="kube-system/kube-apiserver-multinode-101100"
	I0514 00:18:10.699427    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075248    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-ca-certs\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:10.699484    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075285    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-flexvolume-dir\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:10.699514    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075316    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-kubeconfig\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:10.699574    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075345    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8083abd658221f47cabf81a00c4ca98e-kubeconfig\") pod \"kube-scheduler-multinode-101100\" (UID: \"8083abd658221f47cabf81a00c4ca98e\") " pod="kube-system/kube-scheduler-multinode-101100"
	I0514 00:18:10.699574    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.111262    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.112979    1520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.102.122:8443: connect: connection refused" node="multinode-101100"
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.416229    1520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-101100?timeout=10s\": dial tcp 172.23.102.122:8443: connect: connection refused" interval="800ms"
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.515338    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.516940    1520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.102.122:8443: connect: connection refused" node="multinode-101100"
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: W0514 00:16:50.730920    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.730993    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: W0514 00:16:51.074200    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.074270    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: I0514 00:16:51.076835    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="419648c0d4053fc49953367496f1dbfe0fc7ce631e09569d18f5031a7c94053b"
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: W0514 00:16:51.081775    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-101100&limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.081938    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-101100&limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: I0514 00:16:51.108133    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="509b8407e0955daa05e6418b83790728e61d0bd72fecdd814c8e92ae9e80d3a3"
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.218458    1520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-101100?timeout=10s\": dial tcp 172.23.102.122:8443: connect: connection refused" interval="1.6s"
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: I0514 00:16:51.318715    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.319804    1520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.102.122:8443: connect: connection refused" node="multinode-101100"
	I0514 00:18:10.700124    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: W0514 00:16:51.367337    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.700163    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.367409    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.700163    4316 command_runner.go:130] > May 14 00:16:52 multinode-101100 kubelet[1520]: I0514 00:16:52.921237    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:10.700211    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.086028    1520 kubelet_node_status.go:112] "Node was previously registered" node="multinode-101100"
	I0514 00:18:10.700211    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.086698    1520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-multinode-101100\" already exists" pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:10.700251    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.086743    1520 kubelet_node_status.go:76] "Successfully registered node" node="multinode-101100"
	I0514 00:18:10.700251    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.088971    1520 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0514 00:18:10.700299    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.090614    1520 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0514 00:18:10.700299    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.091996    1520 setters.go:580] "Node became not ready" node="multinode-101100" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-05-14T00:16:55Z","lastTransitionTime":"2024-05-14T00:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0514 00:18:10.700339    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.783435    1520 apiserver.go:52] "Watching apiserver"
	I0514 00:18:10.700339    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.788503    1520 topology_manager.go:215] "Topology Admit Handler" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4kmx4"
	I0514 00:18:10.700387    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.788795    1520 topology_manager.go:215] "Topology Admit Handler" podUID="5b3ee167-f21f-46b3-bace-03a7233717e0" podNamespace="kube-system" podName="kindnet-9q2tv"
	I0514 00:18:10.700387    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.788932    1520 topology_manager.go:215] "Topology Admit Handler" podUID="a9a488af-41ba-47f3-87b0-5a2f062afad6" podNamespace="kube-system" podName="kube-proxy-zhcz6"
	I0514 00:18:10.700427    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.789028    1520 topology_manager.go:215] "Topology Admit Handler" podUID="a92f04b8-a93f-42d8-81d7-d4da6bf2e247" podNamespace="kube-system" podName="storage-provisioner"
	I0514 00:18:10.700427    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.789184    1520 topology_manager.go:215] "Topology Admit Handler" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae" podNamespace="default" podName="busybox-fc5497c4f-xqj6w"
	I0514 00:18:10.700515    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.789553    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.700515    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.789850    1520 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-101100" podUID="1d9c79a4-1e4a-46fb-b3e8-02a4775f40af"
	I0514 00:18:10.700562    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.790329    1520 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-101100" podUID="cd31d030-75f8-4abb-bcad-34031cec7aa6"
	I0514 00:18:10.700602    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.794088    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.700602    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.798934    1520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-multinode-101100\" already exists" pod="kube-system/kube-scheduler-multinode-101100"
	I0514 00:18:10.700650    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.809466    1520 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0514 00:18:10.700650    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.835196    1520 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-101100"
	I0514 00:18:10.700689    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.857783    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5b3ee167-f21f-46b3-bace-03a7233717e0-cni-cfg\") pod \"kindnet-9q2tv\" (UID: \"5b3ee167-f21f-46b3-bace-03a7233717e0\") " pod="kube-system/kindnet-9q2tv"
	I0514 00:18:10.700736    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.857845    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b3ee167-f21f-46b3-bace-03a7233717e0-xtables-lock\") pod \"kindnet-9q2tv\" (UID: \"5b3ee167-f21f-46b3-bace-03a7233717e0\") " pod="kube-system/kindnet-9q2tv"
	I0514 00:18:10.700776    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.857866    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9a488af-41ba-47f3-87b0-5a2f062afad6-xtables-lock\") pod \"kube-proxy-zhcz6\" (UID: \"a9a488af-41ba-47f3-87b0-5a2f062afad6\") " pod="kube-system/kube-proxy-zhcz6"
	I0514 00:18:10.700824    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.857954    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b3ee167-f21f-46b3-bace-03a7233717e0-lib-modules\") pod \"kindnet-9q2tv\" (UID: \"5b3ee167-f21f-46b3-bace-03a7233717e0\") " pod="kube-system/kindnet-9q2tv"
	I0514 00:18:10.700824    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.858020    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a92f04b8-a93f-42d8-81d7-d4da6bf2e247-tmp\") pod \"storage-provisioner\" (UID: \"a92f04b8-a93f-42d8-81d7-d4da6bf2e247\") " pod="kube-system/storage-provisioner"
	I0514 00:18:10.700866    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.858051    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9a488af-41ba-47f3-87b0-5a2f062afad6-lib-modules\") pod \"kube-proxy-zhcz6\" (UID: \"a9a488af-41ba-47f3-87b0-5a2f062afad6\") " pod="kube-system/kube-proxy-zhcz6"
	I0514 00:18:10.700866    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.859176    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:10.700953    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.859325    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:16:56.359260421 +0000 UTC m=+6.710289036 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:10.701000    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.873841    1520 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-101100"
	I0514 00:18:10.701000    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.907826    1520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03d9b35578220c9e99f77722d9aa294f" path="/var/lib/kubelet/pods/03d9b35578220c9e99f77722d9aa294f/volumes"
	I0514 00:18:10.701040    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.910490    1520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1af4b764a5249ff25d3c1c709387c273" path="/var/lib/kubelet/pods/1af4b764a5249ff25d3c1c709387c273/volumes"
	I0514 00:18:10.701040    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.917375    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.701087    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.917415    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.701126    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.917466    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:16:56.417450852 +0000 UTC m=+6.768479567 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.701213    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.964380    1520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-101100" podStartSLOduration=0.9643304 podStartE2EDuration="964.3304ms" podCreationTimestamp="2024-05-14 00:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-14 00:16:55.964174289 +0000 UTC m=+6.315203004" watchObservedRunningTime="2024-05-14 00:16:55.9643304 +0000 UTC m=+6.315359015"
	I0514 00:18:10.701260    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.985118    1520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-101100" podStartSLOduration=0.985100539 podStartE2EDuration="985.100539ms" podCreationTimestamp="2024-05-14 00:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-14 00:16:55.984806519 +0000 UTC m=+6.335835134" watchObservedRunningTime="2024-05-14 00:16:55.985100539 +0000 UTC m=+6.336129154"
	I0514 00:18:10.701260    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.362973    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:10.701301    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.363041    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:16:57.363025821 +0000 UTC m=+7.714054436 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:10.701348    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.463836    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.701398    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.463868    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.701443    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.463923    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:16:57.46390701 +0000 UTC m=+7.814935725 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.701443    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.377986    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:10.701480    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.378101    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:16:59.378049439 +0000 UTC m=+9.729078054 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:10.701525    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.478290    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.701562    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.478356    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.701607    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.478448    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:16:59.478431994 +0000 UTC m=+9.829460709 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.701644    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.899119    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.701690    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.899678    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.701690    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.394980    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:10.701728    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.395173    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:17:03.39515828 +0000 UTC m=+13.746186895 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:10.701772    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.496260    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.701772    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.496313    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.701809    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.496438    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:17:03.496350091 +0000 UTC m=+13.847378806 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.701891    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.891391    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:10.701891    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.901591    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.701937    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.914896    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.701974    4316 command_runner.go:130] > May 14 00:17:01 multinode-101100 kubelet[1520]: E0514 00:17:01.898894    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.702019    4316 command_runner.go:130] > May 14 00:17:01 multinode-101100 kubelet[1520]: E0514 00:17:01.899345    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.702019    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.445887    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:10.702056    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.445965    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:17:11.44595071 +0000 UTC m=+21.796979425 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:10.702101    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.547258    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.702101    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.547292    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.702182    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.547346    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:17:11.547331033 +0000 UTC m=+21.898359648 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.702220    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.899515    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.702220    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.900290    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.702265    4316 command_runner.go:130] > May 14 00:17:04 multinode-101100 kubelet[1520]: E0514 00:17:04.893282    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:10.702302    4316 command_runner.go:130] > May 14 00:17:05 multinode-101100 kubelet[1520]: E0514 00:17:05.900260    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.702347    4316 command_runner.go:130] > May 14 00:17:05 multinode-101100 kubelet[1520]: E0514 00:17:05.900651    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.702383    4316 command_runner.go:130] > May 14 00:17:07 multinode-101100 kubelet[1520]: E0514 00:17:07.899212    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.702429    4316 command_runner.go:130] > May 14 00:17:07 multinode-101100 kubelet[1520]: E0514 00:17:07.899658    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.702465    4316 command_runner.go:130] > May 14 00:17:09 multinode-101100 kubelet[1520]: E0514 00:17:09.895008    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:10.702465    4316 command_runner.go:130] > May 14 00:17:09 multinode-101100 kubelet[1520]: E0514 00:17:09.899381    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.702512    4316 command_runner.go:130] > May 14 00:17:09 multinode-101100 kubelet[1520]: E0514 00:17:09.899884    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.702549    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.508629    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:10.702593    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.508833    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:17:27.508813455 +0000 UTC m=+37.859842170 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:10.702629    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.609334    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.702629    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.609455    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.702710    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.609579    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:17:27.609562102 +0000 UTC m=+37.960590817 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.702779    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.899431    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.702779    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.899749    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.702850    4316 command_runner.go:130] > May 14 00:17:13 multinode-101100 kubelet[1520]: E0514 00:17:13.898578    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.702850    4316 command_runner.go:130] > May 14 00:17:13 multinode-101100 kubelet[1520]: E0514 00:17:13.899676    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:14 multinode-101100 kubelet[1520]: E0514 00:17:14.897029    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:15 multinode-101100 kubelet[1520]: E0514 00:17:15.899665    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:15 multinode-101100 kubelet[1520]: E0514 00:17:15.900476    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: I0514 00:17:17.766386    1520 scope.go:117] "RemoveContainer" containerID="9c4eb727cedb65853cc3a94fdcc3e267ed41cd9cb15ef1cc1bb84f6f2278c9c4"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: I0514 00:17:17.767364    1520 scope.go:117] "RemoveContainer" containerID="b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: E0514 00:17:17.767901    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kindnet-cni pod=kindnet-9q2tv_kube-system(5b3ee167-f21f-46b3-bace-03a7233717e0)\"" pod="kube-system/kindnet-9q2tv" podUID="5b3ee167-f21f-46b3-bace-03a7233717e0"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: E0514 00:17:17.898891    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: E0514 00:17:17.899300    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:19 multinode-101100 kubelet[1520]: E0514 00:17:19.898102    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:19 multinode-101100 kubelet[1520]: E0514 00:17:19.899045    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:19 multinode-101100 kubelet[1520]: E0514 00:17:19.899315    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:21 multinode-101100 kubelet[1520]: E0514 00:17:21.900488    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:21 multinode-101100 kubelet[1520]: E0514 00:17:21.900677    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:23 multinode-101100 kubelet[1520]: E0514 00:17:23.899091    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:23 multinode-101100 kubelet[1520]: E0514 00:17:23.899625    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:24 multinode-101100 kubelet[1520]: E0514 00:17:24.899382    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:25 multinode-101100 kubelet[1520]: E0514 00:17:25.900463    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.703445    4316 command_runner.go:130] > May 14 00:17:25 multinode-101100 kubelet[1520]: E0514 00:17:25.900948    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.703483    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.550622    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.550839    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:17:59.550821042 +0000 UTC m=+69.901849657 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.651942    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.651988    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.652038    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:17:59.652024653 +0000 UTC m=+70.003053368 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.900302    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.901190    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: I0514 00:17:27.901408    1520 scope.go:117] "RemoveContainer" containerID="b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7"
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: I0514 00:17:27.913749    1520 scope.go:117] "RemoveContainer" containerID="e6ee22ee5c1b88cb0b1190c646094aefe229bfbd4486f007cde2b36da39ca886"
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: I0514 00:17:27.914050    1520 scope.go:117] "RemoveContainer" containerID="b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc"
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.914651    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a92f04b8-a93f-42d8-81d7-d4da6bf2e247)\"" pod="kube-system/storage-provisioner" podUID="a92f04b8-a93f-42d8-81d7-d4da6bf2e247"
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:29 multinode-101100 kubelet[1520]: E0514 00:17:29.898652    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:29 multinode-101100 kubelet[1520]: E0514 00:17:29.899154    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:29 multinode-101100 kubelet[1520]: E0514 00:17:29.900744    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:31 multinode-101100 kubelet[1520]: E0514 00:17:31.900407    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:31 multinode-101100 kubelet[1520]: E0514 00:17:31.902295    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.704086    4316 command_runner.go:130] > May 14 00:17:33 multinode-101100 kubelet[1520]: E0514 00:17:33.898560    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.704124    4316 command_runner.go:130] > May 14 00:17:33 multinode-101100 kubelet[1520]: E0514 00:17:33.899627    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.704158    4316 command_runner.go:130] > May 14 00:17:39 multinode-101100 kubelet[1520]: I0514 00:17:39.899892    1520 scope.go:117] "RemoveContainer" containerID="b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc"
	I0514 00:18:10.704190    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]: I0514 00:17:49.888753    1520 scope.go:117] "RemoveContainer" containerID="eda79d47d28ffbc726bec7eaad072eeebb31ec439ed9bbe9fd544b9913b8f3ea"
	I0514 00:18:10.704190    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]: E0514 00:17:49.924547    1520 iptables.go:577] "Could not set up iptables canary" err=<
	I0514 00:18:10.704190    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0514 00:18:10.704190    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0514 00:18:10.704190    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0514 00:18:10.704190    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0514 00:18:10.704190    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]: I0514 00:17:49.932695    1520 scope.go:117] "RemoveContainer" containerID="06f1a683cad8348fc4f8e339f226bbda12c4e8c1025c7acb52e2792253dd3008"
	I0514 00:18:10.704190    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 kubelet[1520]: I0514 00:18:00.478966    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cccb5e8cee3b173bd49a88aee4239ccc8bc11a3a166316e92f3a9abce9b252d"
	I0514 00:18:10.704190    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 kubelet[1520]: I0514 00:18:00.543407    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cb9b6d6d0915742a78c054211d49332a04beb4875f8a8f80cc4131b2a11aa2d"
	I0514 00:18:10.742680    4316 logs.go:123] Gathering logs for dmesg ...
	I0514 00:18:10.742680    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0514 00:18:10.762337    4316 command_runner.go:130] > [May14 00:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0514 00:18:10.762337    4316 command_runner.go:130] > [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0514 00:18:10.762337    4316 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0514 00:18:10.762337    4316 command_runner.go:130] > [  +0.104207] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0514 00:18:10.762337    4316 command_runner.go:130] > [  +0.023601] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0514 00:18:10.762337    4316 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0514 00:18:10.762337    4316 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0514 00:18:10.762337    4316 command_runner.go:130] > [  +0.058832] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0514 00:18:10.762337    4316 command_runner.go:130] > [  +0.024495] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0514 00:18:10.762896    4316 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0514 00:18:10.762896    4316 command_runner.go:130] > [  +5.692465] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0514 00:18:10.762896    4316 command_runner.go:130] > [  +0.707713] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0514 00:18:10.762933    4316 command_runner.go:130] > [  +1.789899] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0514 00:18:10.762933    4316 command_runner.go:130] > [  +7.282690] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0514 00:18:10.762933    4316 command_runner.go:130] > [  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0514 00:18:10.762933    4316 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0514 00:18:10.762933    4316 command_runner.go:130] > [May14 00:16] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	I0514 00:18:10.762933    4316 command_runner.go:130] > [  +0.158382] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	I0514 00:18:10.762989    4316 command_runner.go:130] > [ +23.750429] systemd-fstab-generator[974]: Ignoring "noauto" option for root device
	I0514 00:18:10.762989    4316 command_runner.go:130] > [  +0.111929] kauditd_printk_skb: 73 callbacks suppressed
	I0514 00:18:10.763019    4316 command_runner.go:130] > [  +0.464883] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	I0514 00:18:10.763019    4316 command_runner.go:130] > [  +0.164872] systemd-fstab-generator[1027]: Ignoring "noauto" option for root device
	I0514 00:18:10.763019    4316 command_runner.go:130] > [  +0.194348] systemd-fstab-generator[1041]: Ignoring "noauto" option for root device
	I0514 00:18:10.763019    4316 command_runner.go:130] > [  +2.832176] systemd-fstab-generator[1229]: Ignoring "noauto" option for root device
	I0514 00:18:10.763019    4316 command_runner.go:130] > [  +0.181315] systemd-fstab-generator[1241]: Ignoring "noauto" option for root device
	I0514 00:18:10.763019    4316 command_runner.go:130] > [  +0.160798] systemd-fstab-generator[1253]: Ignoring "noauto" option for root device
	I0514 00:18:10.763163    4316 command_runner.go:130] > [  +0.238904] systemd-fstab-generator[1268]: Ignoring "noauto" option for root device
	I0514 00:18:10.763200    4316 command_runner.go:130] > [  +0.787359] systemd-fstab-generator[1378]: Ignoring "noauto" option for root device
	I0514 00:18:10.763200    4316 command_runner.go:130] > [  +0.085936] kauditd_printk_skb: 205 callbacks suppressed
	I0514 00:18:10.763200    4316 command_runner.go:130] > [  +3.384697] systemd-fstab-generator[1513]: Ignoring "noauto" option for root device
	I0514 00:18:10.763200    4316 command_runner.go:130] > [  +1.802132] kauditd_printk_skb: 64 callbacks suppressed
	I0514 00:18:10.763200    4316 command_runner.go:130] > [  +5.213940] kauditd_printk_skb: 10 callbacks suppressed
	I0514 00:18:10.763200    4316 command_runner.go:130] > [  +3.471694] systemd-fstab-generator[2315]: Ignoring "noauto" option for root device
	I0514 00:18:10.763200    4316 command_runner.go:130] > [May14 00:17] kauditd_printk_skb: 70 callbacks suppressed
	I0514 00:18:10.765058    4316 logs.go:123] Gathering logs for kube-apiserver [da9e6534cd87] ...
	I0514 00:18:10.765058    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e6534cd87"
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:52.020111       1 options.go:221] external host was not specified, using 172.23.102.122
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:52.031119       1 server.go:148] Version: v1.30.0
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:52.031201       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:52.560170       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:52.562027       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:52.567323       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:52.562214       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:52.570134       1 instance.go:299] Using reconciler: lease
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:53.544464       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0514 00:18:10.790208    4316 command_runner.go:130] ! W0514 00:16:53.544866       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:53.780904       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:53.781233       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:54.015006       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:54.172205       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:54.186014       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0514 00:18:10.790208    4316 command_runner.go:130] ! W0514 00:16:54.186188       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790208    4316 command_runner.go:130] ! W0514 00:16:54.186609       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:54.187573       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0514 00:18:10.790208    4316 command_runner.go:130] ! W0514 00:16:54.187695       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:54.188811       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:54.190200       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0514 00:18:10.790208    4316 command_runner.go:130] ! W0514 00:16:54.190309       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0514 00:18:10.790208    4316 command_runner.go:130] ! W0514 00:16:54.190366       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:54.192283       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0514 00:18:10.790208    4316 command_runner.go:130] ! W0514 00:16:54.192583       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:54.193726       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0514 00:18:10.790208    4316 command_runner.go:130] ! W0514 00:16:54.193833       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.193842       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.194656       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.194769       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.194831       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.195773       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.200522       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.200808       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.201073       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.202173       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.202352       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.202465       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.204036       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.204232       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.213708       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.213869       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.213992       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.214976       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.215217       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.215317       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.226860       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.227134       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.227258       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.230259       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.232567       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.232734       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.232824       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.239186       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.239294       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.239304       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.241605       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.241703       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.241712       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.242373       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.242466       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.259244       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.259536       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.792225       1 secure_serving.go:213] Serving securely on [::]:8443
	I0514 00:18:10.791303    4316 command_runner.go:130] ! I0514 00:16:54.792432       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.794552       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.794677       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.794720       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.795157       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.795787       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.795995       1 controller.go:116] Starting legacy_token_tracking_controller
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.796042       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.796156       1 controller.go:78] Starting OpenAPI AggregationController
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.796272       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.797969       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.798688       1 available_controller.go:423] Starting AvailableConditionController
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.798701       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.799424       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.799667       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.799692       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.800971       1 aggregator.go:163] waiting for initial CRD sync...
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.792447       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.792459       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.792473       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.812587       1 controller.go:139] Starting OpenAPI controller
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.812611       1 controller.go:87] Starting OpenAPI V3 controller
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.812626       1 naming_controller.go:291] Starting NamingConditionController
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.812640       1 establishing_controller.go:76] Starting EstablishingController
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.812660       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.812674       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.812685       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.848957       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.849152       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.850275       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.850299       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.906495       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.938841       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.950730       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.950897       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.951294       1 aggregator.go:165] initial CRD sync complete...
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.951545       1 autoregister_controller.go:141] Starting autoregister controller
	I0514 00:18:10.791967    4316 command_runner.go:130] ! I0514 00:16:54.951793       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0514 00:18:10.791967    4316 command_runner.go:130] ! I0514 00:16:54.951875       1 cache.go:39] Caches are synced for autoregister controller
	I0514 00:18:10.792057    4316 command_runner.go:130] ! I0514 00:16:54.962299       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0514 00:18:10.792232    4316 command_runner.go:130] ! I0514 00:16:54.968027       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0514 00:18:10.792323    4316 command_runner.go:130] ! I0514 00:16:54.968302       1 policy_source.go:224] refreshing policies
	I0514 00:18:10.792414    4316 command_runner.go:130] ! I0514 00:16:54.997391       1 shared_informer.go:320] Caches are synced for configmaps
	I0514 00:18:10.792504    4316 command_runner.go:130] ! I0514 00:16:54.999391       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0514 00:18:10.792594    4316 command_runner.go:130] ! I0514 00:16:54.999732       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0514 00:18:10.792682    4316 command_runner.go:130] ! I0514 00:16:54.999871       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0514 00:18:10.792772    4316 command_runner.go:130] ! I0514 00:16:55.037244       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0514 00:18:10.792861    4316 command_runner.go:130] ! I0514 00:16:55.824524       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0514 00:18:10.792951    4316 command_runner.go:130] ! W0514 00:16:56.521956       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.102.122 172.23.106.39]
	I0514 00:18:10.793042    4316 command_runner.go:130] ! I0514 00:16:56.523614       1 controller.go:615] quota admission added evaluator for: endpoints
	I0514 00:18:10.793132    4316 command_runner.go:130] ! I0514 00:16:56.536716       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0514 00:18:10.793223    4316 command_runner.go:130] ! I0514 00:16:57.861026       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0514 00:18:10.793314    4316 command_runner.go:130] ! I0514 00:16:58.068043       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0514 00:18:10.793404    4316 command_runner.go:130] ! I0514 00:16:58.085925       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0514 00:18:10.793494    4316 command_runner.go:130] ! I0514 00:16:58.189328       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0514 00:18:10.793581    4316 command_runner.go:130] ! I0514 00:16:58.200849       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0514 00:18:10.793711    4316 command_runner.go:130] ! W0514 00:17:16.528300       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.102.122]
	I0514 00:18:10.800185    4316 logs.go:123] Gathering logs for kube-scheduler [d3581c1c570c] ...
	I0514 00:18:10.800185    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3581c1c570c"
	I0514 00:18:10.823372    4316 command_runner.go:130] ! I0514 00:16:52.716401       1 serving.go:380] Generated self-signed cert in-memory
	I0514 00:18:10.823372    4316 command_runner.go:130] ! W0514 00:16:54.858727       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0514 00:18:10.823372    4316 command_runner.go:130] ! W0514 00:16:54.858778       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:10.823372    4316 command_runner.go:130] ! W0514 00:16:54.858790       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0514 00:18:10.823372    4316 command_runner.go:130] ! W0514 00:16:54.858800       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0514 00:18:10.823372    4316 command_runner.go:130] ! I0514 00:16:54.945438       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0514 00:18:10.823372    4316 command_runner.go:130] ! I0514 00:16:54.945867       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:10.823372    4316 command_runner.go:130] ! I0514 00:16:54.953986       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0514 00:18:10.823372    4316 command_runner.go:130] ! I0514 00:16:54.957180       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0514 00:18:10.823372    4316 command_runner.go:130] ! I0514 00:16:54.957284       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:18:10.823372    4316 command_runner.go:130] ! I0514 00:16:54.957493       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:10.823372    4316 command_runner.go:130] ! I0514 00:16:55.058381       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:18:10.825653    4316 logs.go:123] Gathering logs for etcd [08450c853590] ...
	I0514 00:18:10.825691    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08450c853590"
	I0514 00:18:10.856035    4316 command_runner.go:130] ! {"level":"warn","ts":"2024-05-14T00:16:51.687231Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0514 00:18:10.856484    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.691397Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.23.102.122:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.23.102.122:2380","--initial-cluster=multinode-101100=https://172.23.102.122:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.23.102.122:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.23.102.122:2380","--name=multinode-101100","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0514 00:18:10.856484    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.692425Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0514 00:18:10.856484    4316 command_runner.go:130] ! {"level":"warn","ts":"2024-05-14T00:16:51.693634Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0514 00:18:10.856484    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.693771Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.23.102.122:2380"]}
	I0514 00:18:10.856484    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.694117Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0514 00:18:10.856484    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.703219Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.23.102.122:2379"]}
	I0514 00:18:10.857021    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.704312Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-101100","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.23.102.122:2380"],"listen-peer-urls":["https://172.23.102.122:2380"],"advertise-client-urls":["https://172.23.102.122:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.102.122:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0514 00:18:10.857021    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.7264Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"19.905879ms"}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.748539Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.766395Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"bb849d1df0b559d7","local-member-id":"6e4c15c3d0f3380f","commit-index":1898}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.767439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f switched to configuration voters=()"}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.767611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became follower at term 2"}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.768086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 6e4c15c3d0f3380f [peers: [], term: 2, commit: 1898, applied: 0, lastindex: 1898, lastterm: 2]"}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"warn","ts":"2024-05-14T00:16:51.782157Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.786938Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1096}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.797876Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1653}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.80426Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.81216Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"6e4c15c3d0f3380f","timeout":"7s"}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.813213Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"6e4c15c3d0f3380f"}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.814234Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"6e4c15c3d0f3380f","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.815302Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.816695Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.816877Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0514 00:18:10.857636    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.816978Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0514 00:18:10.857636    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.817493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f switched to configuration voters=(7947751373170489359)"}
	I0514 00:18:10.857726    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.817687Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bb849d1df0b559d7","local-member-id":"6e4c15c3d0f3380f","added-peer-id":"6e4c15c3d0f3380f","added-peer-peer-urls":["https://172.23.106.39:2380"]}
	I0514 00:18:10.857770    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.817911Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bb849d1df0b559d7","local-member-id":"6e4c15c3d0f3380f","cluster-version":"3.5"}
	I0514 00:18:10.857770    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.818648Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0514 00:18:10.857770    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.83299Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0514 00:18:10.857944    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.834951Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6e4c15c3d0f3380f","initial-advertise-peer-urls":["https://172.23.102.122:2380"],"listen-peer-urls":["https://172.23.102.122:2380"],"advertise-client-urls":["https://172.23.102.122:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.102.122:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0514 00:18:10.857944    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.835138Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0514 00:18:10.857944    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.835469Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.23.102.122:2380"}
	I0514 00:18:10.858045    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.835603Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.23.102.122:2380"}
	I0514 00:18:10.858045    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.468953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f is starting a new election at term 2"}
	I0514 00:18:10.858045    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became pre-candidate at term 2"}
	I0514 00:18:10.858045    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f received MsgPreVoteResp from 6e4c15c3d0f3380f at term 2"}
	I0514 00:18:10.858167    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became candidate at term 3"}
	I0514 00:18:10.858167    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f received MsgVoteResp from 6e4c15c3d0f3380f at term 3"}
	I0514 00:18:10.858167    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became leader at term 3"}
	I0514 00:18:10.858277    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6e4c15c3d0f3380f elected leader 6e4c15c3d0f3380f at term 3"}
	I0514 00:18:10.858373    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.479025Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"6e4c15c3d0f3380f","local-member-attributes":"{Name:multinode-101100 ClientURLs:[https://172.23.102.122:2379]}","request-path":"/0/members/6e4c15c3d0f3380f/attributes","cluster-id":"bb849d1df0b559d7","publish-timeout":"7s"}
	I0514 00:18:10.858426    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.479459Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0514 00:18:10.858458    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.479642Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0514 00:18:10.858458    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.481317Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0514 00:18:10.858458    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.481353Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0514 00:18:10.858458    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.483334Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.23.102.122:2379"}
	I0514 00:18:10.858565    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.483616Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0514 00:18:10.864013    4316 logs.go:123] Gathering logs for coredns [dcc5a109288b] ...
	I0514 00:18:10.864544    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc5a109288b"
	I0514 00:18:10.892128    4316 command_runner.go:130] > .:53
	I0514 00:18:10.892190    4316 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = aa3c53a4fee7c79042020c4ad5abc53f615c90ace85c56ddcef4febd643c83c914a53a500e1bfe4eab6dd4f6a22b9d2014a8ba875b505ed10d3063ed95ac2ed3
	I0514 00:18:10.892190    4316 command_runner.go:130] > CoreDNS-1.11.1
	I0514 00:18:10.892190    4316 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0514 00:18:10.892190    4316 command_runner.go:130] > [INFO] 127.0.0.1:53257 - 27032 "HINFO IN 6976640239659908905.245956973392320689. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.05278328s
	I0514 00:18:10.892190    4316 logs.go:123] Gathering logs for kube-controller-manager [b87239d1199a] ...
	I0514 00:18:10.892190    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87239d1199a"
	I0514 00:18:10.918917    4316 command_runner.go:130] ! I0514 00:16:52.414723       1 serving.go:380] Generated self-signed cert in-memory
	I0514 00:18:10.918917    4316 command_runner.go:130] ! I0514 00:16:52.798318       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0514 00:18:10.918917    4316 command_runner.go:130] ! I0514 00:16:52.798456       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:10.919561    4316 command_runner.go:130] ! I0514 00:16:52.802364       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0514 00:18:10.919641    4316 command_runner.go:130] ! I0514 00:16:52.802939       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0514 00:18:10.919641    4316 command_runner.go:130] ! I0514 00:16:52.803159       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:10.919641    4316 command_runner.go:130] ! I0514 00:16:52.803510       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:10.919641    4316 command_runner.go:130] ! I0514 00:16:56.867503       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0514 00:18:10.919641    4316 command_runner.go:130] ! I0514 00:16:56.868219       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0514 00:18:10.919641    4316 command_runner.go:130] ! I0514 00:16:56.874269       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0514 00:18:10.919641    4316 command_runner.go:130] ! I0514 00:16:56.878308       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0514 00:18:10.919641    4316 command_runner.go:130] ! I0514 00:16:56.878330       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0514 00:18:10.919641    4316 command_runner.go:130] ! I0514 00:16:56.878409       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0514 00:18:10.919641    4316 command_runner.go:130] ! I0514 00:16:56.878509       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0514 00:18:10.919641    4316 command_runner.go:130] ! I0514 00:16:56.878517       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0514 00:18:10.919641    4316 command_runner.go:130] ! I0514 00:16:56.882632       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0514 00:18:10.920296    4316 command_runner.go:130] ! I0514 00:16:56.882648       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0514 00:18:10.920607    4316 command_runner.go:130] ! I0514 00:16:56.882656       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0514 00:18:10.920871    4316 command_runner.go:130] ! I0514 00:16:56.883478       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0514 00:18:10.920871    4316 command_runner.go:130] ! I0514 00:16:56.883488       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0514 00:18:10.920871    4316 command_runner.go:130] ! I0514 00:16:56.883496       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0514 00:18:10.920871    4316 command_runner.go:130] ! I0514 00:16:56.885766       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0514 00:18:10.920871    4316 command_runner.go:130] ! I0514 00:16:56.888273       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0514 00:18:10.920871    4316 command_runner.go:130] ! I0514 00:16:56.888463       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0514 00:18:10.921563    4316 command_runner.go:130] ! I0514 00:16:56.889304       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.890244       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.890408       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.893619       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.903162       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.903183       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.969340       1 shared_informer.go:320] Caches are synced for tokens
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.982656       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.982729       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.983268       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.983299       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.983354       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.983426       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.983451       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! W0514 00:16:56.983466       1 shared_informer.go:597] resyncPeriod 15h46m20.096782659s is smaller than resyncCheckPeriod 18h37m10.298700604s and the informer has already started. Changing it to 18h37m10.298700604s
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.983922       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.984377       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.984435       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.984460       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.984478       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0514 00:18:10.922668    4316 command_runner.go:130] ! I0514 00:16:56.984528       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:56.984568       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:56.984736       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:56.985288       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:56.995607       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:56.996188       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:56.997004       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:56.997141       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:56.997174       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:56.997363       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:56.997373       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:57.003479       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:57.004086       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:57.004336       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:57.004348       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:17:07.031733       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:17:07.032143       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:17:07.032242       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:17:07.032648       1 shared_informer.go:313] Waiting for caches to sync for node
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:17:07.034995       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:17:07.035109       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:17:07.035510       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:17:07.035544       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0514 00:18:10.923376    4316 command_runner.go:130] ! I0514 00:17:07.035551       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0514 00:18:10.923429    4316 command_runner.go:130] ! I0514 00:17:07.038183       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.038394       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.039212       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.040784       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.041050       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.041194       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.043909       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.044044       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.044106       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.059101       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.059352       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.059503       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.062189       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.062615       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.062641       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.070971       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.071021       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.071151       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.071293       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0514 00:18:10.924106    4316 command_runner.go:130] ! I0514 00:17:07.071328       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.071388       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.083342       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.084321       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.084474       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.085952       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.086347       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.086569       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.088414       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.088731       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.089444       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.091486       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.091650       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.091678       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.094570       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.095467       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.095818       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.097778       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.098911       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.098939       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.100648       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.101514       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.101659       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.103436       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.103908       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.109194       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.109267       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.109496       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.113760       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.114024       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.114252       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.115259       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.116925       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.117254       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.117353       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.121368       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.121764       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.121788       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.122128       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.122156       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.122248       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.122301       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.122371       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.122432       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.122464       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.122706       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:10.925751    4316 command_runner.go:130] ! I0514 00:17:07.123282       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.123678       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.126535       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.126692       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0514 00:18:10.925783    4316 command_runner.go:130] ! E0514 00:17:07.165594       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.165634       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.218097       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.218271       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.218379       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.218721       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.265917       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.266033       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.266045       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.315398       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.315511       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.315534       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.415899       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.416022       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.465981       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.466026       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.466177       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.466545       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.516337       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.516498       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.516515       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.567477       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.567616       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.567627       1 shared_informer.go:313] Waiting for caches to sync for job
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.617346       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.617464       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.617476       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0514 00:18:10.926610    4316 command_runner.go:130] ! E0514 00:17:07.665765       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.665865       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.665876       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.671623       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.693623       1 shared_informer.go:320] Caches are synced for crt configmap
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.703208       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.707002       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100\" does not exist"
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.707898       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m02\" does not exist"
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.708010       1 shared_informer.go:320] Caches are synced for daemon sets
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.708168       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m03\" does not exist"
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.710800       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.710879       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.716140       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.716709       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.717695       1 shared_informer.go:320] Caches are synced for cronjob
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.717710       1 shared_informer.go:320] Caches are synced for stateful set
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.718924       1 shared_informer.go:320] Caches are synced for attach detach
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.723267       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.723378       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.723467       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.723495       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.726980       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.733271       1 shared_informer.go:320] Caches are synced for node
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.733445       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.733467       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.733473       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.733480       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.739996       1 shared_informer.go:320] Caches are synced for expand
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.742032       1 shared_informer.go:320] Caches are synced for PV protection
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.744959       1 shared_informer.go:320] Caches are synced for ephemeral
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.760453       1 shared_informer.go:320] Caches are synced for namespace
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.762790       1 shared_informer.go:320] Caches are synced for service account
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.766175       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.767750       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.768151       1 shared_informer.go:320] Caches are synced for job
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.779225       1 shared_informer.go:320] Caches are synced for TTL
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.779406       1 shared_informer.go:320] Caches are synced for GC
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.784902       1 shared_informer.go:320] Caches are synced for HPA
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.787441       1 shared_informer.go:320] Caches are synced for persistent volume
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.790178       1 shared_informer.go:320] Caches are synced for PVC protection
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.791571       1 shared_informer.go:320] Caches are synced for endpoint
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.797318       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.816750       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.836762       1 shared_informer.go:320] Caches are synced for taint
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.837127       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.869081       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m03"
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.869544       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m02"
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.869413       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100"
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.870789       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.898670       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.901033       1 shared_informer.go:320] Caches are synced for deployment
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.904366       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.916125       1 shared_informer.go:320] Caches are synced for disruption
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.977330       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.988956       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:08.134754       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="230.307102ms"
	I0514 00:18:10.928114    4316 command_runner.go:130] ! I0514 00:17:08.134896       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.6µs"
	I0514 00:18:10.928114    4316 command_runner.go:130] ! I0514 00:17:08.140785       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="234.508146ms"
	I0514 00:18:10.928114    4316 command_runner.go:130] ! I0514 00:17:08.140977       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.3µs"
	I0514 00:18:10.928114    4316 command_runner.go:130] ! I0514 00:17:08.412419       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 00:18:10.928114    4316 command_runner.go:130] ! I0514 00:17:08.472034       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 00:18:10.928114    4316 command_runner.go:130] ! I0514 00:17:08.472384       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0514 00:18:10.928114    4316 command_runner.go:130] ! I0514 00:17:37.878702       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0514 00:18:10.928114    4316 command_runner.go:130] ! I0514 00:18:01.608725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.75856ms"
	I0514 00:18:10.928114    4316 command_runner.go:130] ! I0514 00:18:01.608844       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.702µs"
	I0514 00:18:10.928114    4316 command_runner.go:130] ! I0514 00:18:01.651304       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.008µs"
	I0514 00:18:10.928114    4316 command_runner.go:130] ! I0514 00:18:01.710123       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.783088ms"
	I0514 00:18:10.928114    4316 command_runner.go:130] ! I0514 00:18:01.711762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.302µs"
	I0514 00:18:10.943483    4316 logs.go:123] Gathering logs for container status ...
	I0514 00:18:10.943483    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0514 00:18:11.012111    4316 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0514 00:18:11.012224    4316 command_runner.go:130] > 3d0b2f0362eb4       8c811b4aec35f                                                                                         11 seconds ago       Running             busybox                   1                   8cb9b6d6d0915       busybox-fc5497c4f-xqj6w
	I0514 00:18:11.012224    4316 command_runner.go:130] > dcc5a109288b6       cbb01a7bd410d                                                                                         11 seconds ago       Running             coredns                   1                   1cccb5e8cee3b       coredns-7db6d8ff4d-4kmx4
	I0514 00:18:11.012224    4316 command_runner.go:130] > bde84ba2d4ed7       6e38f40d628db                                                                                         32 seconds ago       Running             storage-provisioner       2                   468a0e2976ae4       storage-provisioner
	I0514 00:18:11.012334    4316 command_runner.go:130] > 2b424a7cd98c8       4950bb10b3f87                                                                                         44 seconds ago       Running             kindnet-cni               2                   5233e076edceb       kindnet-9q2tv
	I0514 00:18:11.012391    4316 command_runner.go:130] > b7d8d9a5e5eaf       4950bb10b3f87                                                                                         About a minute ago   Exited              kindnet-cni               1                   5233e076edceb       kindnet-9q2tv
	I0514 00:18:11.012482    4316 command_runner.go:130] > b142687b621f1       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   468a0e2976ae4       storage-provisioner
	I0514 00:18:11.012482    4316 command_runner.go:130] > b2a1b31cd7dee       a0bf559e280cf                                                                                         About a minute ago   Running             kube-proxy                1                   a8ac60a565998       kube-proxy-zhcz6
	I0514 00:18:11.012584    4316 command_runner.go:130] > 08450c853590d       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   419648c0d4053       etcd-multinode-101100
	I0514 00:18:11.012647    4316 command_runner.go:130] > da9e6534cd87d       c42f13656d0b2                                                                                         About a minute ago   Running             kube-apiserver            0                   509b8407e0955       kube-apiserver-multinode-101100
	I0514 00:18:11.012709    4316 command_runner.go:130] > d3581c1c570cf       259c8277fcbbc                                                                                         About a minute ago   Running             kube-scheduler            1                   ddcaadef980ac       kube-scheduler-multinode-101100
	I0514 00:18:11.012771    4316 command_runner.go:130] > b87239d1199ab       c7aad43836fa5                                                                                         About a minute ago   Running             kube-controller-manager   1                   659643d47b9ae       kube-controller-manager-multinode-101100
	I0514 00:18:11.012840    4316 command_runner.go:130] > 57dea5416eb67       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago       Exited              busybox                   0                   76d1b8ce19aba       busybox-fc5497c4f-xqj6w
	I0514 00:18:11.012902    4316 command_runner.go:130] > 76c5ab7859eff       cbb01a7bd410d                                                                                         21 minutes ago       Exited              coredns                   0                   8bb49b28c842a       coredns-7db6d8ff4d-4kmx4
	I0514 00:18:11.012970    4316 command_runner.go:130] > 91edaaa00da23       a0bf559e280cf                                                                                         21 minutes ago       Exited              kube-proxy                0                   9bd694480978f       kube-proxy-zhcz6
	I0514 00:18:11.013032    4316 command_runner.go:130] > e96f94398d6dd       c7aad43836fa5                                                                                         22 minutes ago       Exited              kube-controller-manager   0                   da9268fd6556b       kube-controller-manager-multinode-101100
	I0514 00:18:11.013093    4316 command_runner.go:130] > 964887fc5d362       259c8277fcbbc                                                                                         22 minutes ago       Exited              kube-scheduler            0                   fcb3b27edcd2a       kube-scheduler-multinode-101100
	I0514 00:18:13.531771    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods
	I0514 00:18:13.531771    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:13.531841    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:13.531841    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:13.537239    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:18:13.537239    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:13.537239    4316 round_trippers.go:580]     Audit-Id: 8989f81f-81b8-463b-8a74-473c5dfd49a5
	I0514 00:18:13.537239    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:13.537239    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:13.537239    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:13.537239    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:13.537239    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:13 GMT
	I0514 00:18:13.539774    4316 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1863"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1851","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86610 chars]
	I0514 00:18:13.545454    4316 system_pods.go:59] 12 kube-system pods found
	I0514 00:18:13.545454    4316 system_pods.go:61] "coredns-7db6d8ff4d-4kmx4" [06858a47-f51b-48d8-a2a6-f60b8107be13] Running
	I0514 00:18:13.545454    4316 system_pods.go:61] "etcd-multinode-101100" [74cd34fe-a56b-453d-afb3-a9db3db0d5ba] Running
	I0514 00:18:13.545454    4316 system_pods.go:61] "kindnet-2lwsm" [26b8beff-9849-4cbf-9a2b-8ef6354fa5ca] Running
	I0514 00:18:13.545454    4316 system_pods.go:61] "kindnet-9q2tv" [5b3ee167-f21f-46b3-bace-03a7233717e0] Running
	I0514 00:18:13.545454    4316 system_pods.go:61] "kindnet-tfbt8" [95a6d195-9e10-4569-902b-b56e495c9b86] Running
	I0514 00:18:13.545454    4316 system_pods.go:61] "kube-apiserver-multinode-101100" [60889645-4c2d-4cfc-b322-c0f1b6e34503] Running
	I0514 00:18:13.545454    4316 system_pods.go:61] "kube-controller-manager-multinode-101100" [1a74381a-7477-4fd3-b344-c4a230014f97] Running
	I0514 00:18:13.545454    4316 system_pods.go:61] "kube-proxy-8zsgn" [af208cbd-fa8a-4822-9b19-dc30f63fa59c] Running
	I0514 00:18:13.545454    4316 system_pods.go:61] "kube-proxy-b25hq" [d39f5818-3e88-4162-a7ce-734ca28103bf] Running
	I0514 00:18:13.545454    4316 system_pods.go:61] "kube-proxy-zhcz6" [a9a488af-41ba-47f3-87b0-5a2f062afad6] Running
	I0514 00:18:13.545454    4316 system_pods.go:61] "kube-scheduler-multinode-101100" [d7300c2d-377f-4061-bd34-5f7593b7e827] Running
	I0514 00:18:13.545454    4316 system_pods.go:61] "storage-provisioner" [a92f04b8-a93f-42d8-81d7-d4da6bf2e247] Running
	I0514 00:18:13.545454    4316 system_pods.go:74] duration metric: took 3.6060276s to wait for pod list to return data ...
	I0514 00:18:13.545454    4316 default_sa.go:34] waiting for default service account to be created ...
	I0514 00:18:13.545454    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/default/serviceaccounts
	I0514 00:18:13.545454    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:13.545454    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:13.545454    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:13.552270    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:18:13.552270    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:13.552270    4316 round_trippers.go:580]     Content-Length: 262
	I0514 00:18:13.552270    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:13 GMT
	I0514 00:18:13.552270    4316 round_trippers.go:580]     Audit-Id: eef845ef-8759-43a4-838e-441516c8f729
	I0514 00:18:13.552270    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:13.552270    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:13.552270    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:13.552270    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:13.552270    4316 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1864"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"f8245e64-9479-49b1-8b02-d2e6351373e3","resourceVersion":"345","creationTimestamp":"2024-05-13T23:56:23Z"}}]}
	I0514 00:18:13.552270    4316 default_sa.go:45] found service account: "default"
	I0514 00:18:13.553293    4316 default_sa.go:55] duration metric: took 7.8381ms for default service account to be created ...
	I0514 00:18:13.553293    4316 system_pods.go:116] waiting for k8s-apps to be running ...
	I0514 00:18:13.553293    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods
	I0514 00:18:13.553293    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:13.553293    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:13.553293    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:13.557410    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:18:13.557410    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:13.557410    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:13.557410    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:13.557410    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:13.557410    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:13 GMT
	I0514 00:18:13.557410    4316 round_trippers.go:580]     Audit-Id: 36974bc1-4a34-4f83-9e69-655bb9bb1689
	I0514 00:18:13.557410    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:13.559046    4316 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1864"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1851","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86610 chars]
	I0514 00:18:13.562523    4316 system_pods.go:86] 12 kube-system pods found
	I0514 00:18:13.562523    4316 system_pods.go:89] "coredns-7db6d8ff4d-4kmx4" [06858a47-f51b-48d8-a2a6-f60b8107be13] Running
	I0514 00:18:13.562523    4316 system_pods.go:89] "etcd-multinode-101100" [74cd34fe-a56b-453d-afb3-a9db3db0d5ba] Running
	I0514 00:18:13.562523    4316 system_pods.go:89] "kindnet-2lwsm" [26b8beff-9849-4cbf-9a2b-8ef6354fa5ca] Running
	I0514 00:18:13.562523    4316 system_pods.go:89] "kindnet-9q2tv" [5b3ee167-f21f-46b3-bace-03a7233717e0] Running
	I0514 00:18:13.562606    4316 system_pods.go:89] "kindnet-tfbt8" [95a6d195-9e10-4569-902b-b56e495c9b86] Running
	I0514 00:18:13.562606    4316 system_pods.go:89] "kube-apiserver-multinode-101100" [60889645-4c2d-4cfc-b322-c0f1b6e34503] Running
	I0514 00:18:13.562606    4316 system_pods.go:89] "kube-controller-manager-multinode-101100" [1a74381a-7477-4fd3-b344-c4a230014f97] Running
	I0514 00:18:13.562606    4316 system_pods.go:89] "kube-proxy-8zsgn" [af208cbd-fa8a-4822-9b19-dc30f63fa59c] Running
	I0514 00:18:13.562606    4316 system_pods.go:89] "kube-proxy-b25hq" [d39f5818-3e88-4162-a7ce-734ca28103bf] Running
	I0514 00:18:13.562606    4316 system_pods.go:89] "kube-proxy-zhcz6" [a9a488af-41ba-47f3-87b0-5a2f062afad6] Running
	I0514 00:18:13.562606    4316 system_pods.go:89] "kube-scheduler-multinode-101100" [d7300c2d-377f-4061-bd34-5f7593b7e827] Running
	I0514 00:18:13.562606    4316 system_pods.go:89] "storage-provisioner" [a92f04b8-a93f-42d8-81d7-d4da6bf2e247] Running
	I0514 00:18:13.562606    4316 system_pods.go:126] duration metric: took 9.3132ms to wait for k8s-apps to be running ...
	I0514 00:18:13.562606    4316 system_svc.go:44] waiting for kubelet service to be running ....
	I0514 00:18:13.569709    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0514 00:18:13.593636    4316 system_svc.go:56] duration metric: took 31.0274ms WaitForService to wait for kubelet
	I0514 00:18:13.593636    4316 kubeadm.go:576] duration metric: took 1m13.9197873s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0514 00:18:13.593636    4316 node_conditions.go:102] verifying NodePressure condition ...
	I0514 00:18:13.593818    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes
	I0514 00:18:13.593818    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:13.593818    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:13.593818    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:13.596012    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:18:13.597047    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:13.597085    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:13.597085    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:13.597085    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:13.597085    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:13.597085    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:13 GMT
	I0514 00:18:13.597085    4316 round_trippers.go:580]     Audit-Id: 393d0d3e-05bc-4242-9acb-37031f44ad8c
	I0514 00:18:13.597594    4316 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1864"},"items":[{"metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16259 chars]
	I0514 00:18:13.598655    4316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 00:18:13.598755    4316 node_conditions.go:123] node cpu capacity is 2
	I0514 00:18:13.598755    4316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 00:18:13.598755    4316 node_conditions.go:123] node cpu capacity is 2
	I0514 00:18:13.598755    4316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 00:18:13.598755    4316 node_conditions.go:123] node cpu capacity is 2
	I0514 00:18:13.598755    4316 node_conditions.go:105] duration metric: took 5.1189ms to run NodePressure ...
	I0514 00:18:13.598755    4316 start.go:240] waiting for startup goroutines ...
	I0514 00:18:13.598755    4316 start.go:245] waiting for cluster config update ...
	I0514 00:18:13.598906    4316 start.go:254] writing updated cluster config ...
	I0514 00:18:13.602892    4316 out.go:177] 
	I0514 00:18:13.606106    4316 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:18:13.617662    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:18:13.618329    4316 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\config.json ...
	I0514 00:18:13.622517    4316 out.go:177] * Starting "multinode-101100-m02" worker node in "multinode-101100" cluster
	I0514 00:18:13.626047    4316 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0514 00:18:13.626047    4316 cache.go:56] Caching tarball of preloaded images
	I0514 00:18:13.627409    4316 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0514 00:18:13.627563    4316 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0514 00:18:13.627740    4316 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\config.json ...
	I0514 00:18:13.629940    4316 start.go:360] acquireMachinesLock for multinode-101100-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0514 00:18:13.630021    4316 start.go:364] duration metric: took 80.7µs to acquireMachinesLock for "multinode-101100-m02"
	I0514 00:18:13.630207    4316 start.go:96] Skipping create...Using existing machine configuration
	I0514 00:18:13.630207    4316 fix.go:54] fixHost starting: m02
	I0514 00:18:13.630594    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:18:15.595326    4316 main.go:141] libmachine: [stdout =====>] : Off
	
	I0514 00:18:15.595326    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:15.595326    4316 fix.go:112] recreateIfNeeded on multinode-101100-m02: state=Stopped err=<nil>
	W0514 00:18:15.595326    4316 fix.go:138] unexpected machine state, will restart: <nil>
	I0514 00:18:15.597802    4316 out.go:177] * Restarting existing hyperv VM for "multinode-101100-m02" ...
	I0514 00:18:15.602068    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-101100-m02
	I0514 00:18:18.419508    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:18:18.419508    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:18.419508    4316 main.go:141] libmachine: Waiting for host to start...
	I0514 00:18:18.419508    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:18:20.447253    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:18:20.447253    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:20.447636    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:18:22.715374    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:18:22.716248    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:23.719516    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:18:25.665983    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:18:25.665983    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:25.665983    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:18:27.939881    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:18:27.939881    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:28.955227    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:18:30.938759    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:18:30.939457    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:30.939529    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:18:33.186613    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:18:33.187320    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:34.191867    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:18:36.230333    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:18:36.230333    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:36.230333    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:18:38.489721    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:18:38.489721    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:39.505162    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:18:41.491972    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:18:41.492654    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:41.492654    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:18:43.849440    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:18:43.850042    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:43.851849    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:18:45.777415    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:18:45.777415    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:45.777415    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:18:48.084790    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:18:48.084790    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:48.084790    4316 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\config.json ...
	I0514 00:18:48.086861    4316 machine.go:94] provisionDockerMachine start ...
	I0514 00:18:48.086913    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:18:50.013257    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:18:50.013257    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:50.013331    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:18:52.325832    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:18:52.325832    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:52.329461    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:18:52.330089    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.97.128 22 <nil> <nil>}
	I0514 00:18:52.330089    4316 main.go:141] libmachine: About to run SSH command:
	hostname
	I0514 00:18:52.466043    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0514 00:18:52.466043    4316 buildroot.go:166] provisioning hostname "multinode-101100-m02"
	I0514 00:18:52.466043    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:18:54.355964    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:18:54.355964    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:54.356414    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:18:56.624255    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:18:56.624255    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:56.628345    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:18:56.628478    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.97.128 22 <nil> <nil>}
	I0514 00:18:56.628478    4316 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-101100-m02 && echo "multinode-101100-m02" | sudo tee /etc/hostname
	I0514 00:18:56.781283    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-101100-m02
	
	I0514 00:18:56.781283    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:18:58.701836    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:18:58.702750    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:58.702750    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:00.983214    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:00.983214    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:00.987311    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:19:00.987488    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.97.128 22 <nil> <nil>}
	I0514 00:19:00.987488    4316 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-101100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-101100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-101100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0514 00:19:01.132677    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0514 00:19:01.132793    4316 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0514 00:19:01.132793    4316 buildroot.go:174] setting up certificates
	I0514 00:19:01.132793    4316 provision.go:84] configureAuth start
	I0514 00:19:01.132876    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:19:03.065570    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:03.065570    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:03.065570    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:05.447599    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:05.447599    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:05.447877    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:19:07.392388    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:07.392388    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:07.392634    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:09.718980    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:09.720082    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:09.720082    4316 provision.go:143] copyHostCerts
	I0514 00:19:09.720082    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0514 00:19:09.720082    4316 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0514 00:19:09.720082    4316 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0514 00:19:09.720791    4316 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0514 00:19:09.721397    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0514 00:19:09.721926    4316 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0514 00:19:09.722009    4316 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0514 00:19:09.722009    4316 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0514 00:19:09.723222    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0514 00:19:09.724232    4316 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0514 00:19:09.724232    4316 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0514 00:19:09.724232    4316 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0514 00:19:09.725680    4316 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-101100-m02 san=[127.0.0.1 172.23.97.128 localhost minikube multinode-101100-m02]
	I0514 00:19:10.051821    4316 provision.go:177] copyRemoteCerts
	I0514 00:19:10.061215    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0514 00:19:10.061363    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:19:12.012557    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:12.012557    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:12.012557    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:14.334062    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:14.334062    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:14.334569    4316 sshutil.go:53] new ssh client: &{IP:172.23.97.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m02\id_rsa Username:docker}
	I0514 00:19:14.449932    4316 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.388437s)
	I0514 00:19:14.449932    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0514 00:19:14.449932    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0514 00:19:14.499297    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0514 00:19:14.499826    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0514 00:19:14.546386    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0514 00:19:14.547091    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0514 00:19:14.587714    4316 provision.go:87] duration metric: took 13.4539789s to configureAuth
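The configureAuth step above generates a CA-signed server certificate whose SAN list covers the node's IPs and hostnames, then copies it to /etc/docker. minikube does this in Go (provision.go), but the equivalent can be sketched with openssl against a temp directory; the subject names and the SAN list below are taken from the log line, everything else is illustrative:

```shell
# Sketch only: minikube's provision.go does this in Go crypto, not openssl.
workdir=$(mktemp -d)
cd "$workdir"
# Self-signed CA (stand-in for ca.pem / ca-key.pem)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca-key.pem -out ca.pem \
  -days 1 -subj "/O=jenkins.multinode-101100-m02" 2>/dev/null
# Server key + CSR
openssl req -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
  -subj "/CN=multinode-101100-m02" 2>/dev/null
# Sign with the SAN list seen in the log:
# san=[127.0.0.1 172.23.97.128 localhost minikube multinode-101100-m02]
printf 'subjectAltName=IP:127.0.0.1,IP:172.23.97.128,DNS:localhost,DNS:minikube,DNS:multinode-101100-m02\n' > san.cnf
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out server.pem -days 1 -extfile san.cnf 2>/dev/null
openssl x509 -in "$workdir/server.pem" -noout -text | grep -A1 'Subject Alternative Name'
```

The resulting ca.pem, server.pem, and server-key.pem are what the scp lines above push into /etc/docker for dockerd's --tlsverify mode.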
	I0514 00:19:14.587714    4316 buildroot.go:189] setting minikube options for container-runtime
	I0514 00:19:14.588629    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:19:14.588629    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:19:16.496233    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:16.496233    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:16.496233    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:18.751837    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:18.751837    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:18.756423    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:19:18.757016    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.97.128 22 <nil> <nil>}
	I0514 00:19:18.757016    4316 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0514 00:19:18.892580    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0514 00:19:18.892580    4316 buildroot.go:70] root file system type: tmpfs
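The rootfs probe above is a one-liner: `df --output=fstype` prints only the filesystem-type column, and `tail -n 1` drops the header. Run locally (outside the VM, so the answer will differ from the log's tmpfs):

```shell
# Same probe minikube runs over SSH; GNU df's --output selects columns.
fstype=$(df --output=fstype / | tail -n 1)
echo "root fstype: $fstype"
```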
	I0514 00:19:18.892775    4316 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0514 00:19:18.892831    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:19:20.791914    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:20.792235    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:20.792235    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:23.067078    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:23.067689    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:23.071582    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:19:23.072106    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.97.128 22 <nil> <nil>}
	I0514 00:19:23.072189    4316 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.23.102.122"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0514 00:19:23.233387    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.23.102.122
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0514 00:19:23.233539    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:19:25.121872    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:25.121872    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:25.122396    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:27.370520    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:27.370593    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:27.375540    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:19:27.375540    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.97.128 22 <nil> <nil>}
	I0514 00:19:27.375540    4316 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0514 00:19:29.620481    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0514 00:19:29.620590    4316 machine.go:97] duration metric: took 41.5310232s to provisionDockerMachine
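The unit install above uses an idempotent pattern: render the new unit to docker.service.new, and only when `diff` reports a difference (or, as here, the old unit doesn't exist yet) move it into place and reload/restart. A minimal sketch against temp files, with systemctl calls omitted:

```shell
# Stand-in paths; the real target is /lib/systemd/system/docker.service.
tmp=$(mktemp -d)
unit="$tmp/docker.service"
printf '[Unit]\nDescription=Docker Application Container Engine\n' > "$unit.new"
# diff fails when the old unit is missing or differs, so the new one is installed
# (minikube then runs daemon-reload / enable / restart in the same branch).
diff -u "$unit" "$unit.new" 2>/dev/null || mv "$unit.new" "$unit"
echo "installed: $(grep -c Description "$unit") directive(s)"
# A second run with identical content is a no-op: diff succeeds, nothing restarts.
printf '[Unit]\nDescription=Docker Application Container Engine\n' > "$unit.new"
if diff -u "$unit" "$unit.new" >/dev/null 2>&1; then rm "$unit.new"; echo "unchanged, no restart"; fi
```

This is why the log shows `diff: can't stat '/lib/systemd/system/docker.service'` on a fresh node: the failed diff is the expected trigger for the first install.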
	I0514 00:19:29.620590    4316 start.go:293] postStartSetup for "multinode-101100-m02" (driver="hyperv")
	I0514 00:19:29.620590    4316 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0514 00:19:29.630170    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0514 00:19:29.630170    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:19:31.552911    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:31.553116    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:31.553148    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:33.784752    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:33.784752    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:33.785375    4316 sshutil.go:53] new ssh client: &{IP:172.23.97.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m02\id_rsa Username:docker}
	I0514 00:19:33.893903    4316 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.26346s)
	I0514 00:19:33.906836    4316 ssh_runner.go:195] Run: cat /etc/os-release
	I0514 00:19:33.915351    4316 command_runner.go:130] > NAME=Buildroot
	I0514 00:19:33.915351    4316 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0514 00:19:33.915351    4316 command_runner.go:130] > ID=buildroot
	I0514 00:19:33.915351    4316 command_runner.go:130] > VERSION_ID=2023.02.9
	I0514 00:19:33.915351    4316 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0514 00:19:33.916035    4316 info.go:137] Remote host: Buildroot 2023.02.9
	I0514 00:19:33.916035    4316 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0514 00:19:33.916574    4316 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0514 00:19:33.917658    4316 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> 59842.pem in /etc/ssl/certs
	I0514 00:19:33.917658    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /etc/ssl/certs/59842.pem
	I0514 00:19:33.927803    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0514 00:19:33.945054    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /etc/ssl/certs/59842.pem (1708 bytes)
	I0514 00:19:33.988022    4316 start.go:296] duration metric: took 4.367152s for postStartSetup
	I0514 00:19:33.988022    4316 fix.go:56] duration metric: took 1m20.3526907s for fixHost
	I0514 00:19:33.988022    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:19:35.871620    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:35.871887    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:35.871968    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:38.102858    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:38.103492    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:38.108496    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:19:38.108496    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.97.128 22 <nil> <nil>}
	I0514 00:19:38.108496    4316 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0514 00:19:38.237822    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715645978.467786522
	
	I0514 00:19:38.238360    4316 fix.go:216] guest clock: 1715645978.467786522
	I0514 00:19:38.238360    4316 fix.go:229] Guest: 2024-05-14 00:19:38.467786522 +0000 UTC Remote: 2024-05-14 00:19:33.9880222 +0000 UTC m=+277.905688301 (delta=4.479764322s)
	I0514 00:19:38.238463    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:19:40.120852    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:40.121011    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:40.121011    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:42.346874    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:42.346874    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:42.351562    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:19:42.351562    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.97.128 22 <nil> <nil>}
	I0514 00:19:42.351562    4316 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715645978
	I0514 00:19:42.503079    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 14 00:19:38 UTC 2024
	
	I0514 00:19:42.503133    4316 fix.go:236] clock set: Tue May 14 00:19:38 UTC 2024
	 (err=<nil>)
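The clock fix above reads the guest clock with `date +%s.%N`, computes the drift against the host-side reference, and resets the guest with `date -s @<epoch>`. A sketch of the arithmetic using the epochs from this log (the host value is an approximation derived from the logged delta of ~4.48s):

```shell
guest_epoch=1715645978   # guest clock, from `date +%s.%N` over SSH (log value)
host_epoch=1715645974    # approximate host-side reference for this run
delta=$((guest_epoch - host_epoch))
echo "clock delta: ${delta}s"
# minikube then runs, over SSH as root:
echo "would run: sudo date -s @${guest_epoch}"
```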
	I0514 00:19:42.503133    4316 start.go:83] releasing machines lock for "multinode-101100-m02", held for 1m28.8673503s
	I0514 00:19:42.503403    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:19:44.384635    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:44.384635    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:44.384635    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:46.653431    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:46.653431    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:46.656221    4316 out.go:177] * Found network options:
	I0514 00:19:46.658525    4316 out.go:177]   - NO_PROXY=172.23.102.122
	W0514 00:19:46.660915    4316 proxy.go:119] fail to check proxy env: Error ip not in block
	I0514 00:19:46.662961    4316 out.go:177]   - NO_PROXY=172.23.102.122
	W0514 00:19:46.666175    4316 proxy.go:119] fail to check proxy env: Error ip not in block
	W0514 00:19:46.667615    4316 proxy.go:119] fail to check proxy env: Error ip not in block
	I0514 00:19:46.669610    4316 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0514 00:19:46.669684    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:19:46.677572    4316 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0514 00:19:46.678153    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:19:48.615764    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:48.615824    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:48.615824    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:48.640727    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:48.641101    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:48.641101    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:51.004312    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:51.004312    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:51.005234    4316 sshutil.go:53] new ssh client: &{IP:172.23.97.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m02\id_rsa Username:docker}
	I0514 00:19:51.025238    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:51.025238    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:51.025238    4316 sshutil.go:53] new ssh client: &{IP:172.23.97.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m02\id_rsa Username:docker}
	I0514 00:19:51.208753    4316 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0514 00:19:51.215947    4316 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5459721s)
	I0514 00:19:51.215947    4316 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0514 00:19:51.215947    4316 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.538085s)
	W0514 00:19:51.215947    4316 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0514 00:19:51.225018    4316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0514 00:19:51.250823    4316 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0514 00:19:51.251610    4316 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
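The CNI-disable step above renames any bridge/podman config in /etc/cni/net.d to `*.mk_disabled` so the runtime ignores it, skipping files already disabled. The same `find`/rename pattern, exercised against a temp directory:

```shell
# Temp stand-in for /etc/cni/net.d with one matching and one non-matching file.
netd=$(mktemp -d)
touch "$netd/87-podman-bridge.conflist" "$netd/10-custom.conflist"
find "$netd" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$netd"
```

Only the podman-bridge config is renamed, matching the log's "disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)".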
	I0514 00:19:51.251681    4316 start.go:494] detecting cgroup driver to use...
	I0514 00:19:51.251681    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 00:19:51.281668    4316 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0514 00:19:51.290468    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0514 00:19:51.316939    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0514 00:19:51.334713    4316 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0514 00:19:51.342698    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0514 00:19:51.368019    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 00:19:51.396019    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0514 00:19:51.422277    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 00:19:51.450060    4316 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0514 00:19:51.476813    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0514 00:19:51.503148    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0514 00:19:51.528279    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0514 00:19:51.555277    4316 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0514 00:19:51.572253    4316 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0514 00:19:51.580107    4316 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0514 00:19:51.605106    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:19:51.773755    4316 ssh_runner.go:195] Run: sudo systemctl restart containerd
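The containerd rewrites above are plain GNU `sed -i -r` edits: flip `SystemdCgroup` to false (cgroupfs driver) and pin the sandbox image, preserving each line's indentation via the `\1` capture. The same edits against a scratch config.toml:

```shell
# Scratch copy; the real target is /etc/containerd/config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
grep -E 'SystemdCgroup|sandbox_image' "$cfg"
```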
	I0514 00:19:51.800702    4316 start.go:494] detecting cgroup driver to use...
	I0514 00:19:51.811030    4316 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0514 00:19:51.830848    4316 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0514 00:19:51.830848    4316 command_runner.go:130] > [Unit]
	I0514 00:19:51.830848    4316 command_runner.go:130] > Description=Docker Application Container Engine
	I0514 00:19:51.830848    4316 command_runner.go:130] > Documentation=https://docs.docker.com
	I0514 00:19:51.830848    4316 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0514 00:19:51.830848    4316 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0514 00:19:51.830848    4316 command_runner.go:130] > StartLimitBurst=3
	I0514 00:19:51.830848    4316 command_runner.go:130] > StartLimitIntervalSec=60
	I0514 00:19:51.830848    4316 command_runner.go:130] > [Service]
	I0514 00:19:51.830848    4316 command_runner.go:130] > Type=notify
	I0514 00:19:51.830848    4316 command_runner.go:130] > Restart=on-failure
	I0514 00:19:51.830848    4316 command_runner.go:130] > Environment=NO_PROXY=172.23.102.122
	I0514 00:19:51.830848    4316 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0514 00:19:51.830848    4316 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0514 00:19:51.830848    4316 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0514 00:19:51.830848    4316 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0514 00:19:51.830848    4316 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0514 00:19:51.830848    4316 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0514 00:19:51.830848    4316 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0514 00:19:51.830848    4316 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0514 00:19:51.830848    4316 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0514 00:19:51.830848    4316 command_runner.go:130] > ExecStart=
	I0514 00:19:51.830848    4316 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0514 00:19:51.830848    4316 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0514 00:19:51.830848    4316 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0514 00:19:51.830848    4316 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0514 00:19:51.830848    4316 command_runner.go:130] > LimitNOFILE=infinity
	I0514 00:19:51.830848    4316 command_runner.go:130] > LimitNPROC=infinity
	I0514 00:19:51.830848    4316 command_runner.go:130] > LimitCORE=infinity
	I0514 00:19:51.830848    4316 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0514 00:19:51.830848    4316 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0514 00:19:51.830848    4316 command_runner.go:130] > TasksMax=infinity
	I0514 00:19:51.830848    4316 command_runner.go:130] > TimeoutStartSec=0
	I0514 00:19:51.830848    4316 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0514 00:19:51.830848    4316 command_runner.go:130] > Delegate=yes
	I0514 00:19:51.831378    4316 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0514 00:19:51.831378    4316 command_runner.go:130] > KillMode=process
	I0514 00:19:51.831378    4316 command_runner.go:130] > [Install]
	I0514 00:19:51.831378    4316 command_runner.go:130] > WantedBy=multi-user.target
	I0514 00:19:51.839535    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 00:19:51.865772    4316 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0514 00:19:51.912691    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 00:19:51.951980    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0514 00:19:51.983632    4316 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0514 00:19:52.045579    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0514 00:19:52.067656    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 00:19:52.098073    4316 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0514 00:19:52.111889    4316 ssh_runner.go:195] Run: which cri-dockerd
	I0514 00:19:52.119036    4316 command_runner.go:130] > /usr/bin/cri-dockerd
	I0514 00:19:52.127858    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0514 00:19:52.144937    4316 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0514 00:19:52.185057    4316 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0514 00:19:52.357323    4316 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0514 00:19:52.544596    4316 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0514 00:19:52.544732    4316 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0514 00:19:52.586210    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:19:52.769373    4316 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0514 00:19:55.326422    4316 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5568288s)
	I0514 00:19:55.334572    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0514 00:19:55.364366    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 00:19:55.398019    4316 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0514 00:19:55.571997    4316 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0514 00:19:55.742930    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:19:55.921722    4316 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0514 00:19:55.959197    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 00:19:55.989752    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:19:56.162754    4316 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0514 00:19:56.260792    4316 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0514 00:19:56.268642    4316 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0514 00:19:56.276468    4316 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0514 00:19:56.276468    4316 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0514 00:19:56.276590    4316 command_runner.go:130] > Device: 0,22	Inode: 848         Links: 1
	I0514 00:19:56.276590    4316 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0514 00:19:56.276590    4316 command_runner.go:130] > Access: 2024-05-14 00:19:56.418179553 +0000
	I0514 00:19:56.276590    4316 command_runner.go:130] > Modify: 2024-05-14 00:19:56.418179553 +0000
	I0514 00:19:56.276590    4316 command_runner.go:130] > Change: 2024-05-14 00:19:56.421179722 +0000
	I0514 00:19:56.276590    4316 command_runner.go:130] >  Birth: -
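"Will wait 60s for socket path" above is a stat-poll: retry `stat` on the socket until it appears or the deadline passes. A sketch using a plain file as a stand-in for /var/run/cri-dockerd.sock (minikube's loop is in Go; the timing and path here are illustrative):

```shell
sock=$(mktemp -d)/cri-dockerd.sock
( sleep 1; touch "$sock" ) &   # simulate the daemon creating its socket
deadline=$(( $(date +%s) + 60 ))
until stat "$sock" >/dev/null 2>&1; do
  [ "$(date +%s)" -ge "$deadline" ] && { echo "timeout"; break; }
  sleep 0.2
done
echo "socket ready: $sock"
```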
	I0514 00:19:56.276590    4316 start.go:562] Will wait 60s for crictl version
	I0514 00:19:56.284826    4316 ssh_runner.go:195] Run: which crictl
	I0514 00:19:56.290588    4316 command_runner.go:130] > /usr/bin/crictl
	I0514 00:19:56.299029    4316 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0514 00:19:56.349024    4316 command_runner.go:130] > Version:  0.1.0
	I0514 00:19:56.349024    4316 command_runner.go:130] > RuntimeName:  docker
	I0514 00:19:56.349285    4316 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0514 00:19:56.349285    4316 command_runner.go:130] > RuntimeApiVersion:  v1
	I0514 00:19:56.349285    4316 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0514 00:19:56.356061    4316 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 00:19:56.381668    4316 command_runner.go:130] > 26.0.2
	I0514 00:19:56.390685    4316 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 00:19:56.416664    4316 command_runner.go:130] > 26.0.2
	I0514 00:19:56.421104    4316 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0514 00:19:56.423482    4316 out.go:177]   - env NO_PROXY=172.23.102.122
	I0514 00:19:56.425105    4316 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0514 00:19:56.428661    4316 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0514 00:19:56.428661    4316 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0514 00:19:56.428661    4316 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0514 00:19:56.428661    4316 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:95:ed Flags:up|broadcast|multicast|running}
	I0514 00:19:56.430655    4316 ip.go:210] interface addr: fe80::3ceb:68d:afab:af25/64
	I0514 00:19:56.430655    4316 ip.go:210] interface addr: 172.23.96.1/20
	I0514 00:19:56.440655    4316 ssh_runner.go:195] Run: grep 172.23.96.1	host.minikube.internal$ /etc/hosts
	I0514 00:19:56.446687    4316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0514 00:19:56.465727    4316 mustload.go:65] Loading cluster: multinode-101100
	I0514 00:19:56.466336    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:19:56.466947    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:19:58.362298    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:58.362298    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:58.362298    4316 host.go:66] Checking if "multinode-101100" exists ...
	I0514 00:19:58.363737    4316 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100 for IP: 172.23.97.128
	I0514 00:19:58.363737    4316 certs.go:194] generating shared ca certs ...
	I0514 00:19:58.363828    4316 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 00:19:58.364332    4316 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0514 00:19:58.364566    4316 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0514 00:19:58.364808    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0514 00:19:58.365072    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0514 00:19:58.365213    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0514 00:19:58.365213    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0514 00:19:58.365213    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem (1338 bytes)
	W0514 00:19:58.365812    4316 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984_empty.pem, impossibly tiny 0 bytes
	I0514 00:19:58.365843    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0514 00:19:58.366079    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0514 00:19:58.366293    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0514 00:19:58.366436    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0514 00:19:58.366436    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem (1708 bytes)
	I0514 00:19:58.366436    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:19:58.366962    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem -> /usr/share/ca-certificates/5984.pem
	I0514 00:19:58.367043    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /usr/share/ca-certificates/59842.pem
	I0514 00:19:58.367261    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0514 00:19:58.414702    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0514 00:19:58.459357    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0514 00:19:58.503434    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0514 00:19:58.545685    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0514 00:19:58.587861    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem --> /usr/share/ca-certificates/5984.pem (1338 bytes)
	I0514 00:19:58.629568    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /usr/share/ca-certificates/59842.pem (1708 bytes)
	I0514 00:19:58.680987    4316 ssh_runner.go:195] Run: openssl version
	I0514 00:19:58.688460    4316 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0514 00:19:58.698963    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5984.pem && ln -fs /usr/share/ca-certificates/5984.pem /etc/ssl/certs/5984.pem"
	I0514 00:19:58.725027    4316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5984.pem
	I0514 00:19:58.731571    4316 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0514 00:19:58.731669    4316 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0514 00:19:58.739103    4316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5984.pem
	I0514 00:19:58.747592    4316 command_runner.go:130] > 51391683
	I0514 00:19:58.754967    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5984.pem /etc/ssl/certs/51391683.0"
	I0514 00:19:58.782080    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59842.pem && ln -fs /usr/share/ca-certificates/59842.pem /etc/ssl/certs/59842.pem"
	I0514 00:19:58.809376    4316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59842.pem
	I0514 00:19:58.814825    4316 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0514 00:19:58.815513    4316 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0514 00:19:58.823670    4316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59842.pem
	I0514 00:19:58.831445    4316 command_runner.go:130] > 3ec20f2e
	I0514 00:19:58.839843    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59842.pem /etc/ssl/certs/3ec20f2e.0"
	I0514 00:19:58.870367    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0514 00:19:58.896373    4316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:19:58.904136    4316 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:19:58.904136    4316 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:19:58.911982    4316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:19:58.920632    4316 command_runner.go:130] > b5213941
	I0514 00:19:58.930068    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0514 00:19:58.957075    4316 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0514 00:19:58.964129    4316 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0514 00:19:58.964129    4316 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0514 00:19:58.964657    4316 kubeadm.go:928] updating node {m02 172.23.97.128 8443 v1.30.0 docker false true} ...
	I0514 00:19:58.964749    4316 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-101100-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.97.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-101100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0514 00:19:58.972565    4316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0514 00:19:58.990072    4316 command_runner.go:130] > kubeadm
	I0514 00:19:58.990072    4316 command_runner.go:130] > kubectl
	I0514 00:19:58.990072    4316 command_runner.go:130] > kubelet
	I0514 00:19:58.990072    4316 binaries.go:44] Found k8s binaries, skipping transfer
	I0514 00:19:59.001506    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0514 00:19:59.018193    4316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0514 00:19:59.047911    4316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0514 00:19:59.084815    4316 ssh_runner.go:195] Run: grep 172.23.102.122	control-plane.minikube.internal$ /etc/hosts
	I0514 00:19:59.090918    4316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.102.122	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0514 00:19:59.118549    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:19:59.295846    4316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0514 00:19:59.320107    4316 host.go:66] Checking if "multinode-101100" exists ...
	I0514 00:19:59.320829    4316 start.go:316] joinCluster: &{Name:multinode-101100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-101100 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.102.122 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.97.128 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.102.231 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provi
sioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0514 00:19:59.320939    4316 start.go:329] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.23.97.128 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0514 00:19:59.321044    4316 host.go:66] Checking if "multinode-101100-m02" exists ...
	I0514 00:19:59.321423    4316 mustload.go:65] Loading cluster: multinode-101100
	I0514 00:19:59.321782    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:19:59.322241    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:20:01.267151    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:20:01.267701    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:01.267701    4316 host.go:66] Checking if "multinode-101100" exists ...
	I0514 00:20:01.268409    4316 api_server.go:166] Checking apiserver status ...
	I0514 00:20:01.281769    4316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 00:20:01.281769    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:20:03.282523    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:20:03.283066    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:03.283066    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:20:05.625428    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:20:05.626217    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:05.626634    4316 sshutil.go:53] new ssh client: &{IP:172.23.102.122 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\id_rsa Username:docker}
	I0514 00:20:05.742922    4316 command_runner.go:130] > 1838
	I0514 00:20:05.743002    4316 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.4609465s)
	I0514 00:20:05.753442    4316 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1838/cgroup
	W0514 00:20:05.770851    4316 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1838/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0514 00:20:05.782794    4316 ssh_runner.go:195] Run: ls
	I0514 00:20:05.789601    4316 api_server.go:253] Checking apiserver healthz at https://172.23.102.122:8443/healthz ...
	I0514 00:20:05.798214    4316 api_server.go:279] https://172.23.102.122:8443/healthz returned 200:
	ok
	I0514 00:20:05.806299    4316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl drain multinode-101100-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0514 00:20:05.964406    4316 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-2lwsm, kube-system/kube-proxy-b25hq
	I0514 00:20:08.986976    4316 command_runner.go:130] > node/multinode-101100-m02 cordoned
	I0514 00:20:08.987180    4316 command_runner.go:130] > pod "busybox-fc5497c4f-q7442" has DeletionTimestamp older than 1 seconds, skipping
	I0514 00:20:08.987180    4316 command_runner.go:130] > node/multinode-101100-m02 drained
	I0514 00:20:08.987298    4316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl drain multinode-101100-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.1807388s)
	I0514 00:20:08.987298    4316 node.go:128] successfully drained node "multinode-101100-m02"
	I0514 00:20:08.987425    4316 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0514 00:20:08.987592    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:20:10.872392    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:20:10.872392    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:10.872392    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:20:13.132030    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:20:13.132030    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:13.132030    4316 sshutil.go:53] new ssh client: &{IP:172.23.97.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m02\id_rsa Username:docker}
	I0514 00:20:13.514414    4316 command_runner.go:130] ! W0514 00:20:13.747274    1538 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0514 00:20:14.000423    4316 command_runner.go:130] ! W0514 00:20:14.233795    1538 cleanupnode.go:106] [reset] Failed to remove containers: failed to stop running pod a7476f13d104b3e1959acab279fd2b27a5c1e30de2afc09d28850c1a79234209: output: E0514 00:20:13.966689    1577 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-q7442_default\" network: cni config uninitialized" podSandboxID="a7476f13d104b3e1959acab279fd2b27a5c1e30de2afc09d28850c1a79234209"
	I0514 00:20:14.000423    4316 command_runner.go:130] ! time="2024-05-14T00:20:13Z" level=fatal msg="stopping the pod sandbox \"a7476f13d104b3e1959acab279fd2b27a5c1e30de2afc09d28850c1a79234209\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-q7442_default\" network: cni config uninitialized"
	I0514 00:20:14.000423    4316 command_runner.go:130] ! : exit status 1
	I0514 00:20:14.020545    4316 command_runner.go:130] > [preflight] Running pre-flight checks
	I0514 00:20:14.020668    4316 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0514 00:20:14.020668    4316 command_runner.go:130] > [reset] Stopping the kubelet service
	I0514 00:20:14.020668    4316 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0514 00:20:14.020668    4316 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0514 00:20:14.020668    4316 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0514 00:20:14.020668    4316 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0514 00:20:14.020668    4316 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0514 00:20:14.020668    4316 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0514 00:20:14.020668    4316 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0514 00:20:14.020839    4316 command_runner.go:130] > to reset your system's IPVS tables.
	I0514 00:20:14.020839    4316 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0514 00:20:14.020867    4316 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0514 00:20:14.020867    4316 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (5.0331194s)
	I0514 00:20:14.020867    4316 node.go:155] successfully reset node "multinode-101100-m02"
	I0514 00:20:14.021881    4316 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0514 00:20:14.022405    4316 kapi.go:59] client config for multinode-101100: &rest.Config{Host:"https://172.23.102.122:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0514 00:20:14.023033    4316 cert_rotation.go:137] Starting client certificate rotation controller
	I0514 00:20:14.023643    4316 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0514 00:20:14.023643    4316 round_trippers.go:463] DELETE https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:14.023643    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:14.023643    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:14.023643    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:14.023643    4316 round_trippers.go:473]     Content-Type: application/json
	I0514 00:20:14.039561    4316 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0514 00:20:14.039561    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:14.039561    4316 round_trippers.go:580]     Content-Length: 171
	I0514 00:20:14.039561    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:14 GMT
	I0514 00:20:14.039561    4316 round_trippers.go:580]     Audit-Id: 9d463315-fe38-4c7b-b5a0-d43f8cd931fb
	I0514 00:20:14.039561    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:14.039561    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:14.039561    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:14.039561    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:14.039561    4316 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-101100-m02","kind":"nodes","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85"}}
	I0514 00:20:14.040164    4316 node.go:180] successfully deleted node "multinode-101100-m02"
	I0514 00:20:14.040164    4316 start.go:333] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.23.97.128 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0514 00:20:14.040231    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0514 00:20:14.040291    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:20:15.927718    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:20:15.927718    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:15.927718    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:20:18.201548    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:20:18.201958    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:18.202397    4316 sshutil.go:53] new ssh client: &{IP:172.23.102.122 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\id_rsa Username:docker}
	I0514 00:20:18.374719    4316 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token gyjkyc.rxhb3b7de4hp8phm --discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 
	I0514 00:20:18.374719    4316 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.3342099s)
	I0514 00:20:18.374719    4316 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.23.97.128 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0514 00:20:18.374719    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gyjkyc.rxhb3b7de4hp8phm --discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-101100-m02"
	I0514 00:20:18.563178    4316 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0514 00:20:19.902974    4316 command_runner.go:130] > [preflight] Running pre-flight checks
	I0514 00:20:19.902974    4316 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0514 00:20:19.902974    4316 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0514 00:20:19.902974    4316 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0514 00:20:19.903162    4316 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0514 00:20:19.903162    4316 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0514 00:20:19.903162    4316 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0514 00:20:19.903162    4316 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002330449s
	I0514 00:20:19.903274    4316 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0514 00:20:19.903274    4316 command_runner.go:130] > This node has joined the cluster:
	I0514 00:20:19.903328    4316 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0514 00:20:19.903364    4316 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0514 00:20:19.903364    4316 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0514 00:20:19.903364    4316 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gyjkyc.rxhb3b7de4hp8phm --discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-101100-m02": (1.5285473s)
	I0514 00:20:19.903364    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0514 00:20:20.109428    4316 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0514 00:20:20.291006    4316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-101100-m02 minikube.k8s.io/updated_at=2024_05_14T00_20_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761 minikube.k8s.io/name=multinode-101100 minikube.k8s.io/primary=false
	I0514 00:20:20.403705    4316 command_runner.go:130] > node/multinode-101100-m02 labeled
	I0514 00:20:20.403803    4316 start.go:318] duration metric: took 21.0816221s to joinCluster
	I0514 00:20:20.403895    4316 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.23.97.128 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0514 00:20:20.407621    4316 out.go:177] * Verifying Kubernetes components...
	I0514 00:20:20.404440    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:20:20.420742    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:20:20.628880    4316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0514 00:20:20.663973    4316 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0514 00:20:20.664375    4316 kapi.go:59] client config for multinode-101100: &rest.Config{Host:"https://172.23.102.122:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0514 00:20:20.665089    4316 node_ready.go:35] waiting up to 6m0s for node "multinode-101100-m02" to be "Ready" ...
	I0514 00:20:20.665089    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:20.665089    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:20.665089    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:20.665089    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:20.677455    4316 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0514 00:20:20.677455    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:20.677455    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:20.677455    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:20.677455    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:20.677455    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:20 GMT
	I0514 00:20:20.677455    4316 round_trippers.go:580]     Audit-Id: 4f488f36-facd-4f63-be23-a295b926cc9a
	I0514 00:20:20.677455    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:20.677455    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"1999","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0514 00:20:21.178898    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:21.179047    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:21.179047    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:21.179047    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:21.189724    4316 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0514 00:20:21.189724    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:21.189724    4316 round_trippers.go:580]     Audit-Id: 7904380d-f5cd-4f00-81c9-968f56135bb0
	I0514 00:20:21.189724    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:21.189724    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:21.189724    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:21.189724    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:21.189724    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:21 GMT
	I0514 00:20:21.189724    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"1999","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0514 00:20:21.668458    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:21.668458    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:21.668458    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:21.668458    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:21.674275    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:20:21.674275    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:21.674275    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:21 GMT
	I0514 00:20:21.674275    4316 round_trippers.go:580]     Audit-Id: cd16b4f3-67c4-4c90-9b2d-78228fd691f5
	I0514 00:20:21.674275    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:21.674275    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:21.674275    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:21.674275    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:21.675139    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"1999","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0514 00:20:22.175885    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:22.175885    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:22.175885    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:22.175885    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:22.179828    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:20:22.180344    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:22.180344    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:22 GMT
	I0514 00:20:22.180344    4316 round_trippers.go:580]     Audit-Id: c458170d-00d4-4dae-b03d-855900e80ad8
	I0514 00:20:22.180344    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:22.180344    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:22.180344    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:22.180344    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:22.180344    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"1999","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0514 00:20:22.675734    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:22.675805    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:22.675805    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:22.675805    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:22.678052    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:20:22.678052    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:22.678052    4316 round_trippers.go:580]     Audit-Id: d2bbaeba-16e5-4d26-99e5-bb2962aa8b6b
	I0514 00:20:22.678052    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:22.678052    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:22.678052    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:22.678052    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:22.678842    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:22 GMT
	I0514 00:20:22.678842    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"1999","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0514 00:20:22.678842    4316 node_ready.go:53] node "multinode-101100-m02" has status "Ready":"False"
	I0514 00:20:23.174368    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:23.174812    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:23.175029    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:23.175029    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:23.178422    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:20:23.178873    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:23.178873    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:23 GMT
	I0514 00:20:23.178941    4316 round_trippers.go:580]     Audit-Id: e3d84f71-b647-4a4f-a589-f5db06f83577
	I0514 00:20:23.178941    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:23.178941    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:23.178941    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:23.178941    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:23.179425    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"2022","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0514 00:20:23.675729    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:23.675729    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:23.675729    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:23.675729    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:23.678954    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:20:23.678954    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:23.678954    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:23.678954    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:23 GMT
	I0514 00:20:23.678954    4316 round_trippers.go:580]     Audit-Id: 83b98649-baaf-48bb-a953-f2b2a96298a4
	I0514 00:20:23.678954    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:23.678954    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:23.678954    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:23.679290    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"2022","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0514 00:20:24.172786    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:24.172786    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:24.172862    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:24.172862    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:24.176750    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:20:24.176750    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:24.176750    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:24.176750    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:24.176750    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:24.176750    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:24 GMT
	I0514 00:20:24.176750    4316 round_trippers.go:580]     Audit-Id: 06b5bf1b-4975-48b3-a94e-dedbae892198
	I0514 00:20:24.176750    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:24.177368    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"2022","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0514 00:20:24.673432    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:24.673432    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:24.673432    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:24.673432    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:24.677995    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:20:24.678210    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:24.678210    4316 round_trippers.go:580]     Audit-Id: 7565a9bd-70ff-47d5-b68e-54e4bc889056
	I0514 00:20:24.678210    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:24.678210    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:24.678210    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:24.678210    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:24.678210    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:24 GMT
	I0514 00:20:24.678945    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"2022","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0514 00:20:25.173269    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:25.173390    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:25.173390    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:25.173390    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:25.176577    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:20:25.176577    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:25.176577    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:25.176577    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:25.176577    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:25 GMT
	I0514 00:20:25.176577    4316 round_trippers.go:580]     Audit-Id: 018fba78-7a56-4803-93f4-61b7fae28f2f
	I0514 00:20:25.176577    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:25.177495    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:25.177595    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"2022","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0514 00:20:25.178390    4316 node_ready.go:53] node "multinode-101100-m02" has status "Ready":"False"
	I0514 00:20:25.674685    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:25.674877    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:25.674877    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:25.674997    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:25.677798    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:20:25.677798    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:25.677798    4316 round_trippers.go:580]     Audit-Id: 632e6767-5a50-4eaf-b7aa-467bd2b002e1
	I0514 00:20:25.677798    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:25.677798    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:25.677798    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:25.678823    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:25.678823    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:25 GMT
	I0514 00:20:25.678968    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"2022","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0514 00:20:26.176533    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:26.176655    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:26.176655    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:26.176655    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:26.181919    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:20:26.181919    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:26.181919    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:26.181919    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:26.181919    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:26.181919    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:26 GMT
	I0514 00:20:26.181919    4316 round_trippers.go:580]     Audit-Id: d9c09ef0-8440-4dba-9ecc-5e59b4739c81
	I0514 00:20:26.181919    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:26.181919    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"2022","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0514 00:20:26.676841    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:26.677235    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:26.677235    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:26.677235    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:26.681043    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:20:26.681043    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:26.681043    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:26.681043    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:26.681043    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:26.681043    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:26 GMT
	I0514 00:20:26.681386    4316 round_trippers.go:580]     Audit-Id: 52a26556-6065-49f4-b55a-c9ccf246bee1
	I0514 00:20:26.681386    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:26.681722    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"2028","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3932 chars]
	I0514 00:20:26.682380    4316 node_ready.go:49] node "multinode-101100-m02" has status "Ready":"True"
	I0514 00:20:26.682490    4316 node_ready.go:38] duration metric: took 6.017016s for node "multinode-101100-m02" to be "Ready" ...
	I0514 00:20:26.682490    4316 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 00:20:26.682725    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods
	I0514 00:20:26.682725    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:26.682725    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:26.682725    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:26.690117    4316 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0514 00:20:26.690117    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:26.690117    4316 round_trippers.go:580]     Audit-Id: afba9995-8927-4ba9-aca5-049f43a71e86
	I0514 00:20:26.690117    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:26.690117    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:26.690117    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:26.690117    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:26.690117    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:26 GMT
	I0514 00:20:26.691742    4316 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2031"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1851","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86160 chars]
	I0514 00:20:26.694767    4316 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:26.695393    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:20:26.695393    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:26.695444    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:26.695444    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:26.697669    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:20:26.697669    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:26.697669    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:26.697669    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:26 GMT
	I0514 00:20:26.697669    4316 round_trippers.go:580]     Audit-Id: 5f80650d-9d8a-413c-8296-41fb51db0810
	I0514 00:20:26.697669    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:26.697669    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:26.697669    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:26.698737    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1851","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0514 00:20:26.699401    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:20:26.699401    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:26.699500    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:26.699500    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:26.702063    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:20:26.702063    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:26.702063    4316 round_trippers.go:580]     Audit-Id: 4bcb1c3f-f4ab-41f0-bcb1-164cbd8354be
	I0514 00:20:26.702063    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:26.702063    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:26.702063    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:26.702063    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:26.702063    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:26 GMT
	I0514 00:20:26.702449    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:20:26.702798    4316 pod_ready.go:92] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"True"
	I0514 00:20:26.702860    4316 pod_ready.go:81] duration metric: took 7.4951ms for pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:26.702860    4316 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:26.702927    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-101100
	I0514 00:20:26.702927    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:26.702927    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:26.702994    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:26.705361    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:20:26.705361    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:26.705361    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:26.705361    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:26 GMT
	I0514 00:20:26.705361    4316 round_trippers.go:580]     Audit-Id: cd2f2035-d4c9-4f0f-ad29-1c24c05857e4
	I0514 00:20:26.705361    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:26.705361    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:26.705361    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:26.705906    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-101100","namespace":"kube-system","uid":"74cd34fe-a56b-453d-afb3-a9db3db0d5ba","resourceVersion":"1779","creationTimestamp":"2024-05-14T00:16:55Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.102.122:2379","kubernetes.io/config.hash":"62d8afc7714e8ab65bff9675d120bb67","kubernetes.io/config.mirror":"62d8afc7714e8ab65bff9675d120bb67","kubernetes.io/config.seen":"2024-05-14T00:16:49.843121737Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:16:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0514 00:20:26.705970    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:20:26.705970    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:26.705970    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:26.705970    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:26.708643    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:20:26.708643    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:26.708643    4316 round_trippers.go:580]     Audit-Id: 5e5f0078-02cf-4e35-af1f-329b3a2e82c5
	I0514 00:20:26.708643    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:26.708643    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:26.708643    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:26.708643    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:26.708643    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:26 GMT
	I0514 00:20:26.708643    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:20:26.708643    4316 pod_ready.go:92] pod "etcd-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0514 00:20:26.708643    4316 pod_ready.go:81] duration metric: took 5.7829ms for pod "etcd-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:26.708643    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:26.710079    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-101100
	I0514 00:20:26.710079    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:26.710079    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:26.710079    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:26.712127    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:20:26.712127    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:26.712127    4316 round_trippers.go:580]     Audit-Id: 29635718-3424-4556-b7b1-f7048c0ff12b
	I0514 00:20:26.712127    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:26.712127    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:26.712127    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:26.712127    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:26.712127    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:26 GMT
	I0514 00:20:26.712127    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-101100","namespace":"kube-system","uid":"60889645-4c2d-4cfc-b322-c0f1b6e34503","resourceVersion":"1775","creationTimestamp":"2024-05-14T00:16:55Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.102.122:8443","kubernetes.io/config.hash":"378d61cf78af695f1df41e321907a84d","kubernetes.io/config.mirror":"378d61cf78af695f1df41e321907a84d","kubernetes.io/config.seen":"2024-05-14T00:16:49.778409853Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:16:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0514 00:20:26.712127    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:20:26.712127    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:26.713235    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:26.713235    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:26.715268    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:20:26.715268    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:26.715595    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:26.715595    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:26 GMT
	I0514 00:20:26.715595    4316 round_trippers.go:580]     Audit-Id: d75dec04-3818-4975-a61f-dbe1b34d57cb
	I0514 00:20:26.715595    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:26.715595    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:26.715635    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:26.715635    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:20:26.715635    4316 pod_ready.go:92] pod "kube-apiserver-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0514 00:20:26.715635    4316 pod_ready.go:81] duration metric: took 6.9916ms for pod "kube-apiserver-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:26.715635    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:26.716239    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-101100
	I0514 00:20:26.716239    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:26.716279    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:26.716279    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:26.717886    4316 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0514 00:20:26.717886    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:26.717886    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:26 GMT
	I0514 00:20:26.717886    4316 round_trippers.go:580]     Audit-Id: 0ee9a6e5-fc25-42b3-89ba-ad4b9bc32b3e
	I0514 00:20:26.717886    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:26.717886    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:26.717886    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:26.718738    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:26.718998    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-101100","namespace":"kube-system","uid":"1a74381a-7477-4fd3-b344-c4a230014f97","resourceVersion":"1752","creationTimestamp":"2024-05-13T23:56:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5393de2704b2efef461d22fa52aa93c8","kubernetes.io/config.mirror":"5393de2704b2efef461d22fa52aa93c8","kubernetes.io/config.seen":"2024-05-13T23:56:09.392106640Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0514 00:20:26.718998    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:20:26.719517    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:26.719517    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:26.719573    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:26.721694    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:20:26.722383    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:26.722447    4316 round_trippers.go:580]     Audit-Id: cca086da-6220-4532-9a39-cd003cd2256e
	I0514 00:20:26.722447    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:26.722447    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:26.722447    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:26.722447    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:26.722447    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:26 GMT
	I0514 00:20:26.722447    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:20:26.722447    4316 pod_ready.go:92] pod "kube-controller-manager-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0514 00:20:26.722447    4316 pod_ready.go:81] duration metric: took 6.2858ms for pod "kube-controller-manager-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:26.722447    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8zsgn" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:26.879504    4316 request.go:629] Waited for 156.1624ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8zsgn
	I0514 00:20:26.879504    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8zsgn
	I0514 00:20:26.879504    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:26.879504    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:26.879504    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:26.883220    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:20:26.883220    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:26.883220    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:27 GMT
	I0514 00:20:26.883220    4316 round_trippers.go:580]     Audit-Id: 31ca88a8-1afd-4794-a7dc-768dedd04973
	I0514 00:20:26.883220    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:26.883220    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:26.883220    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:26.883220    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:26.884206    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8zsgn","generateName":"kube-proxy-","namespace":"kube-system","uid":"af208cbd-fa8a-4822-9b19-dc30f63fa59c","resourceVersion":"1621","creationTimestamp":"2024-05-14T00:03:17Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:03:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0514 00:20:27.082955    4316 request.go:629] Waited for 198.1349ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:20:27.082955    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:20:27.082955    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:27.082955    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:27.082955    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:27.087392    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:20:27.087440    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:27.087440    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:27.087440    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:27.087440    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:27 GMT
	I0514 00:20:27.087440    4316 round_trippers.go:580]     Audit-Id: 09b30563-6a9e-4e45-81a3-ba9db26baa13
	I0514 00:20:27.087440    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:27.087440    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:27.087440    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"fd2d4a0b-dc97-4959-b2ba-0f51719ad2b3","resourceVersion":"1836","creationTimestamp":"2024-05-14T00:12:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_12_45_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:12:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4400 chars]
	I0514 00:20:27.088084    4316 pod_ready.go:97] node "multinode-101100-m03" hosting pod "kube-proxy-8zsgn" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100-m03" has status "Ready":"Unknown"
	I0514 00:20:27.088164    4316 pod_ready.go:81] duration metric: took 365.6932ms for pod "kube-proxy-8zsgn" in "kube-system" namespace to be "Ready" ...
	E0514 00:20:27.088164    4316 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-101100-m03" hosting pod "kube-proxy-8zsgn" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100-m03" has status "Ready":"Unknown"
	I0514 00:20:27.088164    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b25hq" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:27.286724    4316 request.go:629] Waited for 198.5478ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b25hq
	I0514 00:20:27.286905    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b25hq
	I0514 00:20:27.286905    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:27.286905    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:27.286905    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:27.290434    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:20:27.290434    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:27.290434    4316 round_trippers.go:580]     Audit-Id: 11e5a6ce-c5f5-4a8a-b5b2-e65b4e34c84c
	I0514 00:20:27.290434    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:27.290434    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:27.290434    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:27.290434    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:27.290434    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:27 GMT
	I0514 00:20:27.290900    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-b25hq","generateName":"kube-proxy-","namespace":"kube-system","uid":"d39f5818-3e88-4162-a7ce-734ca28103bf","resourceVersion":"2012","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5837 chars]
	I0514 00:20:27.487104    4316 request.go:629] Waited for 195.428ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:27.487104    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:27.487236    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:27.487236    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:27.487236    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:27.490649    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:20:27.491055    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:27.491055    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:27.491055    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:27.491055    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:27 GMT
	I0514 00:20:27.491055    4316 round_trippers.go:580]     Audit-Id: 7c1111cd-33b0-4052-8f89-f3f64bfbdf47
	I0514 00:20:27.491055    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:27.491055    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:27.491490    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"2028","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3932 chars]
	I0514 00:20:27.491632    4316 pod_ready.go:92] pod "kube-proxy-b25hq" in "kube-system" namespace has status "Ready":"True"
	I0514 00:20:27.491632    4316 pod_ready.go:81] duration metric: took 403.4426ms for pod "kube-proxy-b25hq" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:27.491632    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zhcz6" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:27.690118    4316 request.go:629] Waited for 197.9417ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zhcz6
	I0514 00:20:27.690713    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zhcz6
	I0514 00:20:27.690713    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:27.690713    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:27.690713    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:27.702485    4316 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0514 00:20:27.702485    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:27.702485    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:27.702485    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:27.702485    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:27.702485    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:27.702485    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:27 GMT
	I0514 00:20:27.702485    4316 round_trippers.go:580]     Audit-Id: eb7f200a-9aed-42d0-8f92-a3053a93ae8f
	I0514 00:20:27.703212    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zhcz6","generateName":"kube-proxy-","namespace":"kube-system","uid":"a9a488af-41ba-47f3-87b0-5a2f062afad6","resourceVersion":"1732","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0514 00:20:27.877463    4316 request.go:629] Waited for 173.4471ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:20:27.877463    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:20:27.877587    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:27.877587    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:27.877587    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:27.882297    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:20:27.882297    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:27.882297    4316 round_trippers.go:580]     Audit-Id: d7d3e025-019f-44a9-9a52-bc5a3a24882d
	I0514 00:20:27.882297    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:27.882297    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:27.882297    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:27.882297    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:27.882297    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:28 GMT
	I0514 00:20:27.882297    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:20:27.883575    4316 pod_ready.go:92] pod "kube-proxy-zhcz6" in "kube-system" namespace has status "Ready":"True"
	I0514 00:20:27.883575    4316 pod_ready.go:81] duration metric: took 391.3861ms for pod "kube-proxy-zhcz6" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:27.883575    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:28.080394    4316 request.go:629] Waited for 196.8061ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-101100
	I0514 00:20:28.080613    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-101100
	I0514 00:20:28.080613    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:28.080613    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:28.080613    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:28.086458    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:20:28.086458    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:28.086458    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:28.086458    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:28 GMT
	I0514 00:20:28.086458    4316 round_trippers.go:580]     Audit-Id: 0fcc1969-0c8e-49e4-bb7a-ae562507ee61
	I0514 00:20:28.086458    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:28.086458    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:28.086458    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:28.086458    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-101100","namespace":"kube-system","uid":"d7300c2d-377f-4061-bd34-5f7593b7e827","resourceVersion":"1756","creationTimestamp":"2024-05-13T23:56:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8083abd658221f47cabf81a00c4ca98e","kubernetes.io/config.mirror":"8083abd658221f47cabf81a00c4ca98e","kubernetes.io/config.seen":"2024-05-13T23:56:09.392108241Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0514 00:20:28.281620    4316 request.go:629] Waited for 194.481ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:20:28.281926    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:20:28.281926    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:28.281990    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:28.281990    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:28.288804    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:20:28.289345    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:28.289345    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:28 GMT
	I0514 00:20:28.289345    4316 round_trippers.go:580]     Audit-Id: beee54d5-4485-47b5-918d-8122b6f0e00b
	I0514 00:20:28.289457    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:28.289493    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:28.289531    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:28.289565    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:28.289926    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:20:28.290642    4316 pod_ready.go:92] pod "kube-scheduler-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0514 00:20:28.290642    4316 pod_ready.go:81] duration metric: took 407.0407ms for pod "kube-scheduler-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:28.290748    4316 pod_ready.go:38] duration metric: took 1.6079386s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 00:20:28.290805    4316 system_svc.go:44] waiting for kubelet service to be running ....
	I0514 00:20:28.300837    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0514 00:20:28.322580    4316 system_svc.go:56] duration metric: took 31.7881ms WaitForService to wait for kubelet
	I0514 00:20:28.322580    4316 kubeadm.go:576] duration metric: took 7.9181778s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0514 00:20:28.323346    4316 node_conditions.go:102] verifying NodePressure condition ...
	I0514 00:20:28.485289    4316 request.go:629] Waited for 161.7138ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes
	I0514 00:20:28.485289    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes
	I0514 00:20:28.485289    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:28.485289    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:28.485289    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:28.489493    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:20:28.489493    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:28.489493    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:28.489493    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:28.489493    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:28.489493    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:28 GMT
	I0514 00:20:28.489493    4316 round_trippers.go:580]     Audit-Id: cbc88b87-5fbd-4db7-a59e-62381d76c441
	I0514 00:20:28.489493    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:28.490520    4316 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2034"},"items":[{"metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15489 chars]
	I0514 00:20:28.491575    4316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 00:20:28.491575    4316 node_conditions.go:123] node cpu capacity is 2
	I0514 00:20:28.491575    4316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 00:20:28.491575    4316 node_conditions.go:123] node cpu capacity is 2
	I0514 00:20:28.491575    4316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 00:20:28.491575    4316 node_conditions.go:123] node cpu capacity is 2
	I0514 00:20:28.491575    4316 node_conditions.go:105] duration metric: took 168.2179ms to run NodePressure ...
	I0514 00:20:28.491575    4316 start.go:240] waiting for startup goroutines ...
	I0514 00:20:28.491688    4316 start.go:254] writing updated cluster config ...
	I0514 00:20:28.495940    4316 out.go:177] 
	I0514 00:20:28.498719    4316 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:20:28.506905    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:20:28.507068    4316 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\config.json ...
	I0514 00:20:28.513669    4316 out.go:177] * Starting "multinode-101100-m03" worker node in "multinode-101100" cluster
	I0514 00:20:28.517086    4316 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0514 00:20:28.517086    4316 cache.go:56] Caching tarball of preloaded images
	I0514 00:20:28.517889    4316 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0514 00:20:28.518037    4316 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0514 00:20:28.518258    4316 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\config.json ...
	I0514 00:20:28.521555    4316 start.go:360] acquireMachinesLock for multinode-101100-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0514 00:20:28.521623    4316 start.go:364] duration metric: took 68µs to acquireMachinesLock for "multinode-101100-m03"
	I0514 00:20:28.521785    4316 start.go:96] Skipping create...Using existing machine configuration
	I0514 00:20:28.521851    4316 fix.go:54] fixHost starting: m03
	I0514 00:20:28.522162    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:20:30.399299    4316 main.go:141] libmachine: [stdout =====>] : Off
	
	I0514 00:20:30.399299    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:30.399299    4316 fix.go:112] recreateIfNeeded on multinode-101100-m03: state=Stopped err=<nil>
	W0514 00:20:30.399374    4316 fix.go:138] unexpected machine state, will restart: <nil>
	I0514 00:20:30.401935    4316 out.go:177] * Restarting existing hyperv VM for "multinode-101100-m03" ...
	I0514 00:20:30.405567    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-101100-m03
	I0514 00:20:33.177006    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:20:33.177006    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:33.177006    4316 main.go:141] libmachine: Waiting for host to start...
	I0514 00:20:33.177089    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:20:35.181392    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:20:35.181392    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:35.181392    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:20:37.489532    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:20:37.490348    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:38.492807    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:20:40.424581    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:20:40.424581    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:40.424581    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:20:42.708894    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:20:42.708894    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:43.709651    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:20:45.696450    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:20:45.696450    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:45.696450    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:20:47.967696    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:20:47.967696    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:48.979385    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:20:50.995987    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:20:50.995987    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:50.996254    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:20:53.267989    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:20:53.267989    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:54.276705    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:20:56.240941    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:20:56.241739    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:56.241739    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:20:58.547415    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:20:58.547415    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:58.550805    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:00.416141    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:00.416141    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:00.416141    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:02.686191    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:02.686191    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:02.687104    4316 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\config.json ...
	I0514 00:21:02.689123    4316 machine.go:94] provisionDockerMachine start ...
	I0514 00:21:02.689123    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:04.570102    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:04.570102    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:04.570194    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:06.831811    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:06.831811    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:06.835674    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:21:06.836017    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.37 22 <nil> <nil>}
	I0514 00:21:06.836017    4316 main.go:141] libmachine: About to run SSH command:
	hostname
	I0514 00:21:06.976410    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0514 00:21:06.976410    4316 buildroot.go:166] provisioning hostname "multinode-101100-m03"
	I0514 00:21:06.976958    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:08.855652    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:08.855652    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:08.855652    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:11.080615    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:11.080615    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:11.084377    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:21:11.084940    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.37 22 <nil> <nil>}
	I0514 00:21:11.084940    4316 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-101100-m03 && echo "multinode-101100-m03" | sudo tee /etc/hostname
	I0514 00:21:11.255633    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-101100-m03
	
	I0514 00:21:11.255633    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:13.154922    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:13.154922    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:13.154922    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:15.398263    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:15.399017    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:15.402939    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:21:15.402939    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.37 22 <nil> <nil>}
	I0514 00:21:15.402939    4316 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-101100-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-101100-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-101100-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0514 00:21:15.556115    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0514 00:21:15.556115    4316 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0514 00:21:15.556115    4316 buildroot.go:174] setting up certificates
	I0514 00:21:15.556115    4316 provision.go:84] configureAuth start
	I0514 00:21:15.556115    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:17.505754    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:17.505836    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:17.505836    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:19.771382    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:19.771604    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:19.771604    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:21.674514    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:21.675298    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:21.675298    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:23.945466    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:23.946417    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:23.946417    4316 provision.go:143] copyHostCerts
	I0514 00:21:23.946661    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0514 00:21:23.946894    4316 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0514 00:21:23.946894    4316 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0514 00:21:23.947291    4316 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0514 00:21:23.948282    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0514 00:21:23.948520    4316 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0514 00:21:23.948608    4316 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0514 00:21:23.948879    4316 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0514 00:21:23.949724    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0514 00:21:23.949966    4316 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0514 00:21:23.950070    4316 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0514 00:21:23.950193    4316 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0514 00:21:23.951665    4316 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-101100-m03 san=[127.0.0.1 172.23.111.37 localhost minikube multinode-101100-m03]
	I0514 00:21:24.145321    4316 provision.go:177] copyRemoteCerts
	I0514 00:21:24.156296    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0514 00:21:24.156405    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:26.044598    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:26.045653    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:26.045728    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:28.305311    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:28.305311    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:28.305507    4316 sshutil.go:53] new ssh client: &{IP:172.23.111.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m03\id_rsa Username:docker}
	I0514 00:21:28.413951    4316 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2573821s)
	I0514 00:21:28.413951    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0514 00:21:28.413951    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0514 00:21:28.456658    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0514 00:21:28.456658    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0514 00:21:28.500816    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0514 00:21:28.500816    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0514 00:21:28.545337    4316 provision.go:87] duration metric: took 12.9883902s to configureAuth
	I0514 00:21:28.545337    4316 buildroot.go:189] setting minikube options for container-runtime
	I0514 00:21:28.546226    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:21:28.546350    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:30.413910    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:30.413910    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:30.413910    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:32.654867    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:32.654867    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:32.658254    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:21:32.658845    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.37 22 <nil> <nil>}
	I0514 00:21:32.658845    4316 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0514 00:21:32.802245    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0514 00:21:32.802245    4316 buildroot.go:70] root file system type: tmpfs
	I0514 00:21:32.802245    4316 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0514 00:21:32.802797    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:34.691259    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:34.691259    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:34.691341    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:36.944951    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:36.944951    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:36.948633    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:21:36.948633    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.37 22 <nil> <nil>}
	I0514 00:21:36.949469    4316 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.23.102.122"
	Environment="NO_PROXY=172.23.102.122,172.23.97.128"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0514 00:21:37.105736    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.23.102.122
	Environment=NO_PROXY=172.23.102.122,172.23.97.128
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0514 00:21:37.105736    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:38.987690    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:38.987690    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:38.987690    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:41.189935    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:41.189935    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:41.194292    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:21:41.194772    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.37 22 <nil> <nil>}
	I0514 00:21:41.194772    4316 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0514 00:21:43.378819    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0514 00:21:43.378880    4316 machine.go:97] duration metric: took 40.6871503s to provisionDockerMachine
	I0514 00:21:43.378918    4316 start.go:293] postStartSetup for "multinode-101100-m03" (driver="hyperv")
	I0514 00:21:43.378918    4316 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0514 00:21:43.387915    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0514 00:21:43.387915    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:45.259582    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:45.259582    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:45.260125    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:47.508138    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:47.508854    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:47.509144    4316 sshutil.go:53] new ssh client: &{IP:172.23.111.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m03\id_rsa Username:docker}
	I0514 00:21:47.621925    4316 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2333264s)
	I0514 00:21:47.630787    4316 ssh_runner.go:195] Run: cat /etc/os-release
	I0514 00:21:47.636687    4316 command_runner.go:130] > NAME=Buildroot
	I0514 00:21:47.636828    4316 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0514 00:21:47.636894    4316 command_runner.go:130] > ID=buildroot
	I0514 00:21:47.636956    4316 command_runner.go:130] > VERSION_ID=2023.02.9
	I0514 00:21:47.637013    4316 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0514 00:21:47.637193    4316 info.go:137] Remote host: Buildroot 2023.02.9
	I0514 00:21:47.637247    4316 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0514 00:21:47.637507    4316 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0514 00:21:47.638144    4316 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> 59842.pem in /etc/ssl/certs
	I0514 00:21:47.638144    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /etc/ssl/certs/59842.pem
	I0514 00:21:47.647072    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0514 00:21:47.662813    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /etc/ssl/certs/59842.pem (1708 bytes)
	I0514 00:21:47.705663    4316 start.go:296] duration metric: took 4.3264685s for postStartSetup
	I0514 00:21:47.705663    4316 fix.go:56] duration metric: took 1m19.1788045s for fixHost
	I0514 00:21:47.705770    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:49.581897    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:49.581897    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:49.581897    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:51.819389    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:51.819389    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:51.824010    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:21:51.824349    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.37 22 <nil> <nil>}
	I0514 00:21:51.824416    4316 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0514 00:21:51.954481    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715646112.184202835
	
	I0514 00:21:51.954481    4316 fix.go:216] guest clock: 1715646112.184202835
	I0514 00:21:51.954481    4316 fix.go:229] Guest: 2024-05-14 00:21:52.184202835 +0000 UTC Remote: 2024-05-14 00:21:47.7056639 +0000 UTC m=+411.614762401 (delta=4.478538935s)
	I0514 00:21:51.954481    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:53.836606    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:53.836606    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:53.836606    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:56.092057    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:56.092753    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:56.096518    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:21:56.096589    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.37 22 <nil> <nil>}
	I0514 00:21:56.096589    4316 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715646111
	I0514 00:21:56.248205    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 14 00:21:51 UTC 2024
	
	I0514 00:21:56.249225    4316 fix.go:236] clock set: Tue May 14 00:21:51 UTC 2024
	 (err=<nil>)
	I0514 00:21:56.249225    4316 start.go:83] releasing machines lock for "multinode-101100-m03", held for 1m27.7219102s
	I0514 00:21:56.249225    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:58.121332    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:58.121332    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:58.122089    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:22:00.351479    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:22:00.352302    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:00.355130    4316 out.go:177] * Found network options:
	I0514 00:22:00.357987    4316 out.go:177]   - NO_PROXY=172.23.102.122,172.23.97.128
	W0514 00:22:00.360358    4316 proxy.go:119] fail to check proxy env: Error ip not in block
	W0514 00:22:00.360358    4316 proxy.go:119] fail to check proxy env: Error ip not in block
	I0514 00:22:00.362628    4316 out.go:177]   - NO_PROXY=172.23.102.122,172.23.97.128
	W0514 00:22:00.364886    4316 proxy.go:119] fail to check proxy env: Error ip not in block
	W0514 00:22:00.364886    4316 proxy.go:119] fail to check proxy env: Error ip not in block
	W0514 00:22:00.366343    4316 proxy.go:119] fail to check proxy env: Error ip not in block
	W0514 00:22:00.366343    4316 proxy.go:119] fail to check proxy env: Error ip not in block
	I0514 00:22:00.367654    4316 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0514 00:22:00.367654    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:22:00.375693    4316 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0514 00:22:00.375693    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:22:02.356924    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:22:02.357124    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:02.357218    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:22:02.362219    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:22:02.362219    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:02.362756    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:22:04.713971    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:22:04.713971    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:04.714432    4316 sshutil.go:53] new ssh client: &{IP:172.23.111.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m03\id_rsa Username:docker}
	I0514 00:22:04.735525    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:22:04.735934    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:04.736319    4316 sshutil.go:53] new ssh client: &{IP:172.23.111.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m03\id_rsa Username:docker}
	I0514 00:22:04.809488    4316 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0514 00:22:04.810145    4316 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.4340992s)
	W0514 00:22:04.810145    4316 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0514 00:22:04.818898    4316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0514 00:22:04.886656    4316 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0514 00:22:04.886826    4316 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5188182s)
	I0514 00:22:04.886842    4316 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0514 00:22:04.886953    4316 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0514 00:22:04.886953    4316 start.go:494] detecting cgroup driver to use...
	I0514 00:22:04.887296    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 00:22:04.921794    4316 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0514 00:22:04.931734    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0514 00:22:04.963270    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0514 00:22:04.986530    4316 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0514 00:22:04.999807    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0514 00:22:05.029380    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 00:22:05.058352    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0514 00:22:05.083622    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 00:22:05.112998    4316 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0514 00:22:05.142933    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0514 00:22:05.171495    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0514 00:22:05.198510    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0514 00:22:05.224684    4316 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0514 00:22:05.241590    4316 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0514 00:22:05.251440    4316 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0514 00:22:05.277900    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:22:05.461282    4316 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0514 00:22:05.490623    4316 start.go:494] detecting cgroup driver to use...
	I0514 00:22:05.500207    4316 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0514 00:22:05.523447    4316 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0514 00:22:05.523447    4316 command_runner.go:130] > [Unit]
	I0514 00:22:05.523447    4316 command_runner.go:130] > Description=Docker Application Container Engine
	I0514 00:22:05.523447    4316 command_runner.go:130] > Documentation=https://docs.docker.com
	I0514 00:22:05.523447    4316 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0514 00:22:05.523447    4316 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0514 00:22:05.523447    4316 command_runner.go:130] > StartLimitBurst=3
	I0514 00:22:05.523447    4316 command_runner.go:130] > StartLimitIntervalSec=60
	I0514 00:22:05.523447    4316 command_runner.go:130] > [Service]
	I0514 00:22:05.523447    4316 command_runner.go:130] > Type=notify
	I0514 00:22:05.523447    4316 command_runner.go:130] > Restart=on-failure
	I0514 00:22:05.523447    4316 command_runner.go:130] > Environment=NO_PROXY=172.23.102.122
	I0514 00:22:05.523447    4316 command_runner.go:130] > Environment=NO_PROXY=172.23.102.122,172.23.97.128
	I0514 00:22:05.523447    4316 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0514 00:22:05.523447    4316 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0514 00:22:05.523447    4316 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0514 00:22:05.523447    4316 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0514 00:22:05.523447    4316 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0514 00:22:05.523447    4316 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0514 00:22:05.523447    4316 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0514 00:22:05.523447    4316 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0514 00:22:05.523447    4316 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0514 00:22:05.523447    4316 command_runner.go:130] > ExecStart=
	I0514 00:22:05.523447    4316 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0514 00:22:05.524447    4316 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0514 00:22:05.524447    4316 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0514 00:22:05.524447    4316 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0514 00:22:05.524447    4316 command_runner.go:130] > LimitNOFILE=infinity
	I0514 00:22:05.524447    4316 command_runner.go:130] > LimitNPROC=infinity
	I0514 00:22:05.524447    4316 command_runner.go:130] > LimitCORE=infinity
	I0514 00:22:05.524447    4316 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0514 00:22:05.524447    4316 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0514 00:22:05.524447    4316 command_runner.go:130] > TasksMax=infinity
	I0514 00:22:05.524447    4316 command_runner.go:130] > TimeoutStartSec=0
	I0514 00:22:05.524447    4316 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0514 00:22:05.524447    4316 command_runner.go:130] > Delegate=yes
	I0514 00:22:05.524447    4316 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0514 00:22:05.524447    4316 command_runner.go:130] > KillMode=process
	I0514 00:22:05.524447    4316 command_runner.go:130] > [Install]
	I0514 00:22:05.524447    4316 command_runner.go:130] > WantedBy=multi-user.target
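The unit dump above shows the systemd drop-in pattern minikube relies on: the first, empty `ExecStart=` clears the command inherited from the base `docker.service`, and the second supplies the replacement (otherwise systemd rejects the unit with the "more than one ExecStart= setting" error quoted in the comments). A minimal sketch of that pattern follows; the path and the flags are hypothetical, not minikube's actual file:

```ini
# /etc/systemd/system/docker.service.d/10-machine.conf  (hypothetical path)
[Service]
# Blank assignment resets the list-valued ExecStart inherited from the
# base unit; without it both commands would be treated as a sequence,
# which is only valid for Type=oneshot services.
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```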
	I0514 00:22:05.533931    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 00:22:05.567981    4316 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0514 00:22:05.603770    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 00:22:05.637643    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0514 00:22:05.669362    4316 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0514 00:22:05.728890    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0514 00:22:05.756769    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 00:22:05.798538    4316 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0514 00:22:05.807501    4316 ssh_runner.go:195] Run: which cri-dockerd
	I0514 00:22:05.813646    4316 command_runner.go:130] > /usr/bin/cri-dockerd
	I0514 00:22:05.821747    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0514 00:22:05.838769    4316 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0514 00:22:05.879429    4316 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0514 00:22:06.061305    4316 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0514 00:22:06.245852    4316 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0514 00:22:06.245965    4316 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0514 00:22:06.287299    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:22:06.473998    4316 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0514 00:22:09.055475    4316 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5813119s)
	I0514 00:22:09.066661    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0514 00:22:09.097427    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 00:22:09.129009    4316 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0514 00:22:09.311080    4316 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0514 00:22:09.498124    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:22:09.671539    4316 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0514 00:22:09.706431    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 00:22:09.736219    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:22:09.922310    4316 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0514 00:22:10.020923    4316 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0514 00:22:10.030714    4316 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0514 00:22:10.038675    4316 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0514 00:22:10.038675    4316 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0514 00:22:10.038675    4316 command_runner.go:130] > Device: 0,22	Inode: 850         Links: 1
	I0514 00:22:10.038675    4316 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0514 00:22:10.038675    4316 command_runner.go:130] > Access: 2024-05-14 00:22:10.177592384 +0000
	I0514 00:22:10.038675    4316 command_runner.go:130] > Modify: 2024-05-14 00:22:10.177592384 +0000
	I0514 00:22:10.038675    4316 command_runner.go:130] > Change: 2024-05-14 00:22:10.181592534 +0000
	I0514 00:22:10.038675    4316 command_runner.go:130] >  Birth: -
	I0514 00:22:10.038675    4316 start.go:562] Will wait 60s for crictl version
	I0514 00:22:10.045705    4316 ssh_runner.go:195] Run: which crictl
	I0514 00:22:10.052082    4316 command_runner.go:130] > /usr/bin/crictl
	I0514 00:22:10.061346    4316 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0514 00:22:10.113065    4316 command_runner.go:130] > Version:  0.1.0
	I0514 00:22:10.113065    4316 command_runner.go:130] > RuntimeName:  docker
	I0514 00:22:10.113156    4316 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0514 00:22:10.113156    4316 command_runner.go:130] > RuntimeApiVersion:  v1
	I0514 00:22:10.113214    4316 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0514 00:22:10.122534    4316 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 00:22:10.161688    4316 command_runner.go:130] > 26.0.2
	I0514 00:22:10.167681    4316 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 00:22:10.196989    4316 command_runner.go:130] > 26.0.2
	I0514 00:22:10.199755    4316 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0514 00:22:10.203718    4316 out.go:177]   - env NO_PROXY=172.23.102.122
	I0514 00:22:10.205749    4316 out.go:177]   - env NO_PROXY=172.23.102.122,172.23.97.128
	I0514 00:22:10.207617    4316 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0514 00:22:10.211419    4316 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0514 00:22:10.211419    4316 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0514 00:22:10.211419    4316 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0514 00:22:10.211419    4316 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:95:ed Flags:up|broadcast|multicast|running}
	I0514 00:22:10.214411    4316 ip.go:210] interface addr: fe80::3ceb:68d:afab:af25/64
	I0514 00:22:10.214411    4316 ip.go:210] interface addr: 172.23.96.1/20
	I0514 00:22:10.223817    4316 ssh_runner.go:195] Run: grep 172.23.96.1	host.minikube.internal$ /etc/hosts
	I0514 00:22:10.229778    4316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0514 00:22:10.249542    4316 mustload.go:65] Loading cluster: multinode-101100
	I0514 00:22:10.249992    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:22:10.250906    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:22:12.127984    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:22:12.128682    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:12.128682    4316 host.go:66] Checking if "multinode-101100" exists ...
	I0514 00:22:12.129430    4316 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100 for IP: 172.23.111.37
	I0514 00:22:12.129430    4316 certs.go:194] generating shared ca certs ...
	I0514 00:22:12.129430    4316 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 00:22:12.129952    4316 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0514 00:22:12.130258    4316 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0514 00:22:12.130346    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0514 00:22:12.130537    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0514 00:22:12.130697    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0514 00:22:12.130723    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0514 00:22:12.131165    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem (1338 bytes)
	W0514 00:22:12.131440    4316 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984_empty.pem, impossibly tiny 0 bytes
	I0514 00:22:12.131513    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0514 00:22:12.131741    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0514 00:22:12.131893    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0514 00:22:12.132122    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0514 00:22:12.132470    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem (1708 bytes)
	I0514 00:22:12.132586    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:22:12.132745    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem -> /usr/share/ca-certificates/5984.pem
	I0514 00:22:12.132822    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /usr/share/ca-certificates/59842.pem
	I0514 00:22:12.133041    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0514 00:22:12.184529    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0514 00:22:12.244756    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0514 00:22:12.297173    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0514 00:22:12.345941    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0514 00:22:12.391896    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem --> /usr/share/ca-certificates/5984.pem (1338 bytes)
	I0514 00:22:12.434600    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /usr/share/ca-certificates/59842.pem (1708 bytes)
	I0514 00:22:12.492171    4316 ssh_runner.go:195] Run: openssl version
	I0514 00:22:12.501302    4316 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0514 00:22:12.511793    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59842.pem && ln -fs /usr/share/ca-certificates/59842.pem /etc/ssl/certs/59842.pem"
	I0514 00:22:12.536786    4316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59842.pem
	I0514 00:22:12.543890    4316 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0514 00:22:12.543972    4316 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0514 00:22:12.553635    4316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59842.pem
	I0514 00:22:12.561375    4316 command_runner.go:130] > 3ec20f2e
	I0514 00:22:12.569818    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59842.pem /etc/ssl/certs/3ec20f2e.0"
	I0514 00:22:12.597665    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0514 00:22:12.622930    4316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:22:12.629962    4316 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:22:12.630044    4316 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:22:12.642059    4316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:22:12.650748    4316 command_runner.go:130] > b5213941
	I0514 00:22:12.661540    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0514 00:22:12.690067    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5984.pem && ln -fs /usr/share/ca-certificates/5984.pem /etc/ssl/certs/5984.pem"
	I0514 00:22:12.716662    4316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5984.pem
	I0514 00:22:12.724120    4316 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0514 00:22:12.724288    4316 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0514 00:22:12.733760    4316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5984.pem
	I0514 00:22:12.741790    4316 command_runner.go:130] > 51391683
	I0514 00:22:12.750627    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5984.pem /etc/ssl/certs/51391683.0"
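The three `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's hashed-directory lookup: each CA cert in `/etc/ssl/certs` gets a symlink named after its subject-name hash with a `.0` suffix. A standalone sketch of the scheme, using a scratch directory and the `3ec20f2e` hash the log reports for `59842.pem` (in practice the hash comes from running `openssl x509 -hash -noout` on the PEM):

```shell
# Recreate the hashed-symlink layout in a throwaway directory.
certdir=$(mktemp -d)
touch "$certdir/59842.pem"        # stand-in for the real PEM file
hash=3ec20f2e                     # from: openssl x509 -hash -noout -in 59842.pem
# OpenSSL resolves CAs via <subject-hash>.<n> symlinks; .0 is the first.
ln -fs "$certdir/59842.pem" "$certdir/$hash.0"
ls -l "$certdir/$hash.0"
```

The `test -L ... || ln -fs ...` guard in the log makes the step idempotent across restarts.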
	I0514 00:22:12.776716    4316 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0514 00:22:12.783486    4316 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0514 00:22:12.783486    4316 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0514 00:22:12.783486    4316 kubeadm.go:928] updating node {m03 172.23.111.37 0 v1.30.0  false true} ...
	I0514 00:22:12.784100    4316 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-101100-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.111.37
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-101100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0514 00:22:12.792376    4316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0514 00:22:12.809150    4316 command_runner.go:130] > kubeadm
	I0514 00:22:12.809150    4316 command_runner.go:130] > kubectl
	I0514 00:22:12.809150    4316 command_runner.go:130] > kubelet
	I0514 00:22:12.809150    4316 binaries.go:44] Found k8s binaries, skipping transfer
	I0514 00:22:12.818264    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0514 00:22:12.837354    4316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0514 00:22:12.869525    4316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0514 00:22:12.907971    4316 ssh_runner.go:195] Run: grep 172.23.102.122	control-plane.minikube.internal$ /etc/hosts
	I0514 00:22:12.914521    4316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.102.122	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
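The one-liner above is minikube's idempotent `/etc/hosts` update idiom: filter out any existing line for the name with `grep -v`, append the fresh entry, and copy the result back. A sketch of the same idiom against a scratch file (the seed entries are hypothetical; no `sudo` needed here):

```shell
# Build a scratch hosts file with a stale entry for the name.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.23.0.9\tcontrol-plane.minikube.internal\n' > "$hosts"
# Drop any line ending in "<tab>control-plane.minikube.internal",
# then append the current address, so repeated runs never duplicate.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  echo $'172.23.102.122\tcontrol-plane.minikube.internal'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep -c 'control-plane.minikube.internal' "$hosts"   # prints 1
```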
	I0514 00:22:12.941381    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:22:13.133158    4316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0514 00:22:13.161650    4316 host.go:66] Checking if "multinode-101100" exists ...
	I0514 00:22:13.162414    4316 start.go:316] joinCluster: &{Name:multinode-101100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-101100 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.102.122 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.97.128 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.111.37 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:
false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0514 00:22:13.162414    4316 start.go:329] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:172.23.111.37 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0514 00:22:13.162414    4316 host.go:66] Checking if "multinode-101100-m03" exists ...
	I0514 00:22:13.163191    4316 mustload.go:65] Loading cluster: multinode-101100
	I0514 00:22:13.163628    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:22:13.164073    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:22:15.084048    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:22:15.084048    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:15.085030    4316 host.go:66] Checking if "multinode-101100" exists ...
	I0514 00:22:15.085491    4316 api_server.go:166] Checking apiserver status ...
	I0514 00:22:15.093395    4316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 00:22:15.093395    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:22:17.036257    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:22:17.036447    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:17.036527    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:22:19.326564    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:22:19.327171    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:19.327171    4316 sshutil.go:53] new ssh client: &{IP:172.23.102.122 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\id_rsa Username:docker}
	I0514 00:22:19.429259    4316 command_runner.go:130] > 1838
	I0514 00:22:19.429396    4316 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.3355874s)
	I0514 00:22:19.437807    4316 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1838/cgroup
	W0514 00:22:19.458736    4316 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1838/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0514 00:22:19.467719    4316 ssh_runner.go:195] Run: ls
	I0514 00:22:19.474727    4316 api_server.go:253] Checking apiserver healthz at https://172.23.102.122:8443/healthz ...
	I0514 00:22:19.481611    4316 api_server.go:279] https://172.23.102.122:8443/healthz returned 200:
	ok
	I0514 00:22:19.490951    4316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl drain multinode-101100-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0514 00:22:19.642298    4316 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-tfbt8, kube-system/kube-proxy-8zsgn
	I0514 00:22:19.643826    4316 command_runner.go:130] > node/multinode-101100-m03 cordoned
	I0514 00:22:19.644549    4316 command_runner.go:130] > node/multinode-101100-m03 drained
	I0514 00:22:19.644717    4316 node.go:128] successfully drained node "multinode-101100-m03"
	I0514 00:22:19.644717    4316 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0514 00:22:19.644848    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:22:21.533290    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:22:21.533369    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:21.533369    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:22:23.781215    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:22:23.781215    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:23.782328    4316 sshutil.go:53] new ssh client: &{IP:172.23.111.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m03\id_rsa Username:docker}
	I0514 00:22:24.169698    4316 command_runner.go:130] ! W0514 00:22:24.402117    1486 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0514 00:22:24.530628    4316 command_runner.go:130] > [preflight] Running pre-flight checks
	I0514 00:22:24.530679    4316 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0514 00:22:24.530679    4316 command_runner.go:130] > [reset] Stopping the kubelet service
	I0514 00:22:24.530719    4316 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0514 00:22:24.530719    4316 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0514 00:22:24.530751    4316 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0514 00:22:24.530751    4316 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0514 00:22:24.530751    4316 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0514 00:22:24.530801    4316 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0514 00:22:24.530801    4316 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0514 00:22:24.530801    4316 command_runner.go:130] > to reset your system's IPVS tables.
	I0514 00:22:24.530801    4316 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0514 00:22:24.530801    4316 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0514 00:22:24.530801    4316 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (4.8856942s)
	I0514 00:22:24.530995    4316 node.go:155] successfully reset node "multinode-101100-m03"
	I0514 00:22:24.531797    4316 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0514 00:22:24.532444    4316 kapi.go:59] client config for multinode-101100: &rest.Config{Host:"https://172.23.102.122:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0514 00:22:24.533198    4316 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0514 00:22:24.533263    4316 round_trippers.go:463] DELETE https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:22:24.533263    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:24.533263    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:24.533326    4316 round_trippers.go:473]     Content-Type: application/json
	I0514 00:22:24.533326    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:24.550241    4316 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0514 00:22:24.550241    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:24.550241    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:24.550241    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:24.550241    4316 round_trippers.go:580]     Content-Length: 171
	I0514 00:22:24.550241    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:24 GMT
	I0514 00:22:24.550241    4316 round_trippers.go:580]     Audit-Id: a88d2b44-64bb-4987-a7d0-c03092b9e2e3
	I0514 00:22:24.550241    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:24.550241    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:24.550241    4316 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-101100-m03","kind":"nodes","uid":"fd2d4a0b-dc97-4959-b2ba-0f51719ad2b3"}}
	I0514 00:22:24.550840    4316 node.go:180] successfully deleted node "multinode-101100-m03"
	I0514 00:22:24.550840    4316 start.go:333] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:172.23.111.37 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0514 00:22:24.550930    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0514 00:22:24.551007    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:22:26.445965    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:22:26.445965    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:26.446918    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:22:28.699598    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:22:28.699598    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:28.699598    4316 sshutil.go:53] new ssh client: &{IP:172.23.102.122 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\id_rsa Username:docker}
	I0514 00:22:28.886585    4316 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token j355gq.bh21j9sltd7tgxsw --discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 
	I0514 00:22:28.886585    4316 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.3353782s)
	I0514 00:22:28.887584    4316 start.go:342] trying to join worker node "m03" to cluster: &{Name:m03 IP:172.23.111.37 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0514 00:22:28.887584    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token j355gq.bh21j9sltd7tgxsw --discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-101100-m03"
	I0514 00:22:29.086610    4316 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0514 00:22:30.422942    4316 command_runner.go:130] > [preflight] Running pre-flight checks
	I0514 00:22:30.423024    4316 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0514 00:22:30.423024    4316 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0514 00:22:30.423024    4316 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0514 00:22:30.423024    4316 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0514 00:22:30.423024    4316 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0514 00:22:30.423138    4316 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0514 00:22:30.423138    4316 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002661537s
	I0514 00:22:30.423138    4316 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0514 00:22:30.423138    4316 command_runner.go:130] > This node has joined the cluster:
	I0514 00:22:30.423211    4316 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0514 00:22:30.423211    4316 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0514 00:22:30.423273    4316 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0514 00:22:30.423273    4316 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token j355gq.bh21j9sltd7tgxsw --discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-101100-m03": (1.5355913s)
	I0514 00:22:30.423360    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0514 00:22:30.625570    4316 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0514 00:22:30.829669    4316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-101100-m03 minikube.k8s.io/updated_at=2024_05_14T00_22_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761 minikube.k8s.io/name=multinode-101100 minikube.k8s.io/primary=false
	I0514 00:22:30.962568    4316 command_runner.go:130] > node/multinode-101100-m03 labeled
	I0514 00:22:30.962696    4316 start.go:318] duration metric: took 17.7991448s to joinCluster
	I0514 00:22:30.963023    4316 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.23.111.37 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0514 00:22:30.966858    4316 out.go:177] * Verifying Kubernetes components...
	I0514 00:22:30.963921    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:22:30.977741    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:22:31.178666    4316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0514 00:22:31.205179    4316 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0514 00:22:31.206161    4316 kapi.go:59] client config for multinode-101100: &rest.Config{Host:"https://172.23.102.122:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0514 00:22:31.207253    4316 node_ready.go:35] waiting up to 6m0s for node "multinode-101100-m03" to be "Ready" ...
	I0514 00:22:31.207253    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:22:31.207253    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:31.207253    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:31.207253    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:31.213710    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:22:31.213710    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:31.214680    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:31.214680    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:31.214680    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:31 GMT
	I0514 00:22:31.214680    4316 round_trippers.go:580]     Audit-Id: 5fc9ab20-804d-4d36-8ac1-22507b3fd9e3
	I0514 00:22:31.214680    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:31.214680    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:31.214680    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"950aa8d1-19df-4c88-9945-14378ec5f191","resourceVersion":"2181","creationTimestamp":"2024-05-14T00:22:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_22_30_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:22:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}}, [truncated 3396 chars]
	I0514 00:22:31.722426    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:22:31.722426    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:31.722481    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:31.722481    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:31.724955    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:22:31.724955    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:31.724955    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:31.725671    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:31.725671    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:31 GMT
	I0514 00:22:31.725671    4316 round_trippers.go:580]     Audit-Id: eafaf302-743a-4936-b61e-b6eb0ae95a14
	I0514 00:22:31.725671    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:31.725671    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:31.726110    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"950aa8d1-19df-4c88-9945-14378ec5f191","resourceVersion":"2181","creationTimestamp":"2024-05-14T00:22:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_22_30_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:22:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}}, [truncated 3396 chars]
	I0514 00:22:32.210573    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:22:32.210638    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:32.210638    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:32.210638    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:32.216109    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:22:32.216109    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:32.216109    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:32.216109    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:32.216109    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:32.216109    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:32 GMT
	I0514 00:22:32.216109    4316 round_trippers.go:580]     Audit-Id: 88352a26-6350-4fb6-904a-cd30eeb911b9
	I0514 00:22:32.216109    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:32.216827    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"950aa8d1-19df-4c88-9945-14378ec5f191","resourceVersion":"2181","creationTimestamp":"2024-05-14T00:22:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_22_30_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:22:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}}, [truncated 3396 chars]
	I0514 00:22:32.713324    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:22:32.713324    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:32.713324    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:32.713324    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:32.720032    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:22:32.720032    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:32.720032    4316 round_trippers.go:580]     Audit-Id: 559f72a3-3e52-4bac-9e0f-ec11ed30a4f2
	I0514 00:22:32.720032    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:32.720032    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:32.720032    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:32.720032    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:32.720032    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:32 GMT
	I0514 00:22:32.720735    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"950aa8d1-19df-4c88-9945-14378ec5f191","resourceVersion":"2181","creationTimestamp":"2024-05-14T00:22:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_22_30_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:22:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}}, [truncated 3396 chars]
	I0514 00:22:33.218838    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:22:33.218930    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:33.218952    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:33.218952    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:33.221523    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:22:33.221523    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:33.221523    4316 round_trippers.go:580]     Audit-Id: 7d54da2c-5ce5-4046-a307-f3e8aaec8f56
	I0514 00:22:33.221523    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:33.221523    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:33.221523    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:33.221523    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:33.221523    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:33 GMT
	I0514 00:22:33.221523    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"950aa8d1-19df-4c88-9945-14378ec5f191","resourceVersion":"2190","creationTimestamp":"2024-05-14T00:22:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_22_30_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:22:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3505 chars]
	I0514 00:22:33.221523    4316 node_ready.go:53] node "multinode-101100-m03" has status "Ready":"False"
	I0514 00:22:33.723609    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:22:33.723717    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:33.723796    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:33.723796    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:33.727553    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:22:33.727668    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:33.727723    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:33 GMT
	I0514 00:22:33.727723    4316 round_trippers.go:580]     Audit-Id: 1cfb0054-dddf-43df-8341-d8c807f9aa61
	I0514 00:22:33.727723    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:33.727723    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:33.727762    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:33.727762    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:33.727889    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"950aa8d1-19df-4c88-9945-14378ec5f191","resourceVersion":"2190","creationTimestamp":"2024-05-14T00:22:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_22_30_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:22:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3505 chars]
	I0514 00:22:34.208454    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:22:34.208454    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:34.208454    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:34.208454    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:34.214189    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:22:34.214189    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:34.214189    4316 round_trippers.go:580]     Audit-Id: 45699a1f-eb0b-40d6-ba20-a075773242c7
	I0514 00:22:34.214189    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:34.214189    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:34.214189    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:34.214189    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:34.214189    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:34 GMT
	I0514 00:22:34.214717    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"950aa8d1-19df-4c88-9945-14378ec5f191","resourceVersion":"2190","creationTimestamp":"2024-05-14T00:22:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_22_30_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:22:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3505 chars]
	I0514 00:22:34.708540    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:22:34.708763    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:34.708763    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:34.708763    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:34.712480    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:22:34.712480    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:34.712480    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:34.712480    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:34 GMT
	I0514 00:22:34.712480    4316 round_trippers.go:580]     Audit-Id: 67bedaf4-a410-48e5-86f6-c8d0307f2a0e
	I0514 00:22:34.712480    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:34.712480    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:34.712480    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:34.712480    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"950aa8d1-19df-4c88-9945-14378ec5f191","resourceVersion":"2190","creationTimestamp":"2024-05-14T00:22:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_22_30_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:22:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3505 chars]
	I0514 00:22:35.222577    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:22:35.222577    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.222684    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.222684    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.225748    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:22:35.225748    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.225748    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.226096    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.226096    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:35 GMT
	I0514 00:22:35.226096    4316 round_trippers.go:580]     Audit-Id: d38347b4-a927-45e5-ba00-b0a03178f484
	I0514 00:22:35.226096    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.226096    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.226223    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"950aa8d1-19df-4c88-9945-14378ec5f191","resourceVersion":"2204","creationTimestamp":"2024-05-14T00:22:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_22_30_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:22:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3763 chars]
	I0514 00:22:35.226533    4316 node_ready.go:49] node "multinode-101100-m03" has status "Ready":"True"
	I0514 00:22:35.226654    4316 node_ready.go:38] duration metric: took 4.0191445s for node "multinode-101100-m03" to be "Ready" ...
	I0514 00:22:35.226654    4316 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 00:22:35.226777    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods
	I0514 00:22:35.226777    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.226777    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.226777    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.231623    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:22:35.231623    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.231623    4316 round_trippers.go:580]     Audit-Id: d3610563-9ebf-47da-acc9-11fb4e5a3dd4
	I0514 00:22:35.231623    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.231694    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.231694    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.231694    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.231694    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:35 GMT
	I0514 00:22:35.233208    4316 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2204"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1851","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 85700 chars]
	I0514 00:22:35.236507    4316 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:35.236507    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:22:35.236507    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.236507    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.236507    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.239097    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:22:35.240094    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.240115    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.240115    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:35 GMT
	I0514 00:22:35.240115    4316 round_trippers.go:580]     Audit-Id: 0674e198-bf3c-4b75-aa06-6aa2baa1467b
	I0514 00:22:35.240115    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.240115    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.240115    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.240175    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1851","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0514 00:22:35.240175    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:22:35.240175    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.240175    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.240175    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.243286    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:22:35.243286    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.243286    4316 round_trippers.go:580]     Audit-Id: 085d45b6-d3c1-45cd-a1c1-f640176b3b92
	I0514 00:22:35.243286    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.243286    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.243286    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.243286    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.243286    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:35 GMT
	I0514 00:22:35.244257    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:22:35.244257    4316 pod_ready.go:92] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"True"
	I0514 00:22:35.244257    4316 pod_ready.go:81] duration metric: took 7.7488ms for pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:35.244257    4316 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:35.244257    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-101100
	I0514 00:22:35.244257    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.244257    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.244257    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.247142    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:22:35.247142    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.247142    4316 round_trippers.go:580]     Audit-Id: e4f6db5d-1943-416a-b87a-c378d4270193
	I0514 00:22:35.247142    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.247334    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.247334    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.247334    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.247334    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:35 GMT
	I0514 00:22:35.247493    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-101100","namespace":"kube-system","uid":"74cd34fe-a56b-453d-afb3-a9db3db0d5ba","resourceVersion":"1779","creationTimestamp":"2024-05-14T00:16:55Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.102.122:2379","kubernetes.io/config.hash":"62d8afc7714e8ab65bff9675d120bb67","kubernetes.io/config.mirror":"62d8afc7714e8ab65bff9675d120bb67","kubernetes.io/config.seen":"2024-05-14T00:16:49.843121737Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:16:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0514 00:22:35.247942    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:22:35.248003    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.248003    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.248003    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.251118    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:22:35.251118    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.251118    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.251118    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.251118    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.251230    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.251230    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:35 GMT
	I0514 00:22:35.251230    4316 round_trippers.go:580]     Audit-Id: 90d8dde8-8ccd-4894-a935-03e55fb5d5c0
	I0514 00:22:35.252674    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:22:35.253138    4316 pod_ready.go:92] pod "etcd-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0514 00:22:35.253171    4316 pod_ready.go:81] duration metric: took 8.8806ms for pod "etcd-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:35.253171    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:35.253278    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-101100
	I0514 00:22:35.253311    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.253311    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.253311    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.257363    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:22:35.257363    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.257363    4316 round_trippers.go:580]     Audit-Id: 86901156-fbfa-45ec-bee4-58bd5f849dd7
	I0514 00:22:35.257363    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.257363    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.257363    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.257363    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.257363    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:35 GMT
	I0514 00:22:35.257363    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-101100","namespace":"kube-system","uid":"60889645-4c2d-4cfc-b322-c0f1b6e34503","resourceVersion":"1775","creationTimestamp":"2024-05-14T00:16:55Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.102.122:8443","kubernetes.io/config.hash":"378d61cf78af695f1df41e321907a84d","kubernetes.io/config.mirror":"378d61cf78af695f1df41e321907a84d","kubernetes.io/config.seen":"2024-05-14T00:16:49.778409853Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:16:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0514 00:22:35.259276    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:22:35.259276    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.259276    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.259276    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.261298    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:22:35.261298    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.261298    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.261298    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.261298    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:35 GMT
	I0514 00:22:35.261298    4316 round_trippers.go:580]     Audit-Id: 88f7d8b2-32c3-472f-a6d0-56c97edff491
	I0514 00:22:35.261298    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.261298    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.261298    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:22:35.262290    4316 pod_ready.go:92] pod "kube-apiserver-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0514 00:22:35.262290    4316 pod_ready.go:81] duration metric: took 9.1183ms for pod "kube-apiserver-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:35.262290    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:35.262290    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-101100
	I0514 00:22:35.262290    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.262290    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.262290    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.265590    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:22:35.265590    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.265590    4316 round_trippers.go:580]     Audit-Id: 9d8976cb-6f02-4632-9976-dab069dbc7d6
	I0514 00:22:35.265590    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.265590    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.265590    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.265590    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.265590    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:35 GMT
	I0514 00:22:35.265590    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-101100","namespace":"kube-system","uid":"1a74381a-7477-4fd3-b344-c4a230014f97","resourceVersion":"1752","creationTimestamp":"2024-05-13T23:56:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5393de2704b2efef461d22fa52aa93c8","kubernetes.io/config.mirror":"5393de2704b2efef461d22fa52aa93c8","kubernetes.io/config.seen":"2024-05-13T23:56:09.392106640Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0514 00:22:35.266396    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:22:35.266396    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.266396    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.266396    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.268170    4316 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0514 00:22:35.268170    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.268170    4316 round_trippers.go:580]     Audit-Id: 1bee75b3-a93d-4f96-b61c-47facc6def52
	I0514 00:22:35.268170    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.268170    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.268170    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.268170    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.268170    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:35 GMT
	I0514 00:22:35.268170    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:22:35.268170    4316 pod_ready.go:92] pod "kube-controller-manager-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0514 00:22:35.268170    4316 pod_ready.go:81] duration metric: took 5.8799ms for pod "kube-controller-manager-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:35.268170    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8zsgn" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:35.428006    4316 request.go:629] Waited for 158.6721ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8zsgn
	I0514 00:22:35.428385    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8zsgn
	I0514 00:22:35.428418    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.428461    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.428461    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.432383    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:22:35.432383    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.432383    4316 round_trippers.go:580]     Audit-Id: 72020528-bfeb-44ba-8bb6-c52684e32a80
	I0514 00:22:35.432383    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.432383    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.432383    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.432454    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.432454    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:35 GMT
	I0514 00:22:35.433049    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8zsgn","generateName":"kube-proxy-","namespace":"kube-system","uid":"af208cbd-fa8a-4822-9b19-dc30f63fa59c","resourceVersion":"2194","creationTimestamp":"2024-05-14T00:03:17Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:03:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5837 chars]
	I0514 00:22:35.629181    4316 request.go:629] Waited for 195.0781ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:22:35.629499    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:22:35.629499    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.629499    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.629499    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.632999    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:22:35.632999    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.632999    4316 round_trippers.go:580]     Audit-Id: 6eaa92e9-46ec-48a7-827b-273470d0a01c
	I0514 00:22:35.632999    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.632999    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.632999    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.632999    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.632999    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:35 GMT
	I0514 00:22:35.632999    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"950aa8d1-19df-4c88-9945-14378ec5f191","resourceVersion":"2204","creationTimestamp":"2024-05-14T00:22:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_22_30_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:22:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3763 chars]
	I0514 00:22:35.633644    4316 pod_ready.go:92] pod "kube-proxy-8zsgn" in "kube-system" namespace has status "Ready":"True"
	I0514 00:22:35.633644    4316 pod_ready.go:81] duration metric: took 365.4504ms for pod "kube-proxy-8zsgn" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:35.633644    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b25hq" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:35.832685    4316 request.go:629] Waited for 198.9615ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b25hq
	I0514 00:22:35.832685    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b25hq
	I0514 00:22:35.832685    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.832685    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.832685    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.835474    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:22:35.835474    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.836471    4316 round_trippers.go:580]     Audit-Id: d2783015-c132-4022-b205-8cb8470c898b
	I0514 00:22:35.836471    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.836471    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.836471    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.836471    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.836471    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:36 GMT
	I0514 00:22:35.836522    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-b25hq","generateName":"kube-proxy-","namespace":"kube-system","uid":"d39f5818-3e88-4162-a7ce-734ca28103bf","resourceVersion":"2012","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5837 chars]
	I0514 00:22:36.034928    4316 request.go:629] Waited for 197.5426ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:22:36.035422    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:22:36.035422    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:36.035422    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:36.035422    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:36.041311    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:22:36.041311    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:36.041311    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:36.041311    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:36 GMT
	I0514 00:22:36.041311    4316 round_trippers.go:580]     Audit-Id: 54799025-00b5-43de-8af8-02c05f6b1665
	I0514 00:22:36.041311    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:36.041311    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:36.041311    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:36.042063    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"2032","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3812 chars]
	I0514 00:22:36.042620    4316 pod_ready.go:92] pod "kube-proxy-b25hq" in "kube-system" namespace has status "Ready":"True"
	I0514 00:22:36.042728    4316 pod_ready.go:81] duration metric: took 409.058ms for pod "kube-proxy-b25hq" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:36.042769    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zhcz6" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:36.222713    4316 request.go:629] Waited for 179.8274ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zhcz6
	I0514 00:22:36.226296    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zhcz6
	I0514 00:22:36.226296    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:36.226296    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:36.226296    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:36.232730    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:22:36.232730    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:36.232730    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:36.232730    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:36.232730    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:36.232730    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:36 GMT
	I0514 00:22:36.232730    4316 round_trippers.go:580]     Audit-Id: 6d25e29d-d450-417d-84fb-2e2822e042d8
	I0514 00:22:36.232730    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:36.232907    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zhcz6","generateName":"kube-proxy-","namespace":"kube-system","uid":"a9a488af-41ba-47f3-87b0-5a2f062afad6","resourceVersion":"1732","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0514 00:22:36.425650    4316 request.go:629] Waited for 191.7272ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:22:36.425650    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:22:36.425650    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:36.425650    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:36.425650    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:36.429348    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:22:36.429348    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:36.429668    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:36.429668    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:36.429668    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:36 GMT
	I0514 00:22:36.429668    4316 round_trippers.go:580]     Audit-Id: 6d69090f-b253-4afe-892d-6ba1e2ebf425
	I0514 00:22:36.429668    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:36.429668    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:36.430257    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:22:36.431007    4316 pod_ready.go:92] pod "kube-proxy-zhcz6" in "kube-system" namespace has status "Ready":"True"
	I0514 00:22:36.431092    4316 pod_ready.go:81] duration metric: took 388.2655ms for pod "kube-proxy-zhcz6" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:36.431092    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:36.630031    4316 request.go:629] Waited for 198.7442ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-101100
	I0514 00:22:36.630621    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-101100
	I0514 00:22:36.630621    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:36.630621    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:36.630829    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:36.634252    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:22:36.634716    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:36.634716    4316 round_trippers.go:580]     Audit-Id: c02198a9-1730-432f-bf91-5260c5f2b16b
	I0514 00:22:36.634716    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:36.634716    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:36.634716    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:36.634716    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:36.634826    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:36 GMT
	I0514 00:22:36.635201    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-101100","namespace":"kube-system","uid":"d7300c2d-377f-4061-bd34-5f7593b7e827","resourceVersion":"1756","creationTimestamp":"2024-05-13T23:56:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8083abd658221f47cabf81a00c4ca98e","kubernetes.io/config.mirror":"8083abd658221f47cabf81a00c4ca98e","kubernetes.io/config.seen":"2024-05-13T23:56:09.392108241Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0514 00:22:36.831672    4316 request.go:629] Waited for 195.5902ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:22:36.832176    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:22:36.832255    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:36.832327    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:36.832327    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:36.835655    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:22:36.836206    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:36.836206    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:37 GMT
	I0514 00:22:36.836206    4316 round_trippers.go:580]     Audit-Id: 641059ea-6761-4f4f-8867-f47b2d8b3932
	I0514 00:22:36.836206    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:36.836206    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:36.836206    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:36.836206    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:36.836421    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:22:36.836916    4316 pod_ready.go:92] pod "kube-scheduler-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0514 00:22:36.837020    4316 pod_ready.go:81] duration metric: took 405.8799ms for pod "kube-scheduler-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:36.837020    4316 pod_ready.go:38] duration metric: took 1.6102629s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 00:22:36.837020    4316 system_svc.go:44] waiting for kubelet service to be running ....
	I0514 00:22:36.845708    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0514 00:22:36.872997    4316 system_svc.go:56] duration metric: took 35.9749ms WaitForService to wait for kubelet
	I0514 00:22:36.873132    4316 kubeadm.go:576] duration metric: took 5.9096231s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0514 00:22:36.873196    4316 node_conditions.go:102] verifying NodePressure condition ...
	I0514 00:22:37.034158    4316 request.go:629] Waited for 160.867ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes
	I0514 00:22:37.034398    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes
	I0514 00:22:37.034398    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:37.034482    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:37.034482    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:37.037224    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:22:37.038199    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:37.038199    4316 round_trippers.go:580]     Audit-Id: 0bba16d2-0dff-472f-9e7a-5eb6c7dd1a4d
	I0514 00:22:37.038199    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:37.038199    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:37.038199    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:37.038199    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:37.038199    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:37 GMT
	I0514 00:22:37.038555    4316 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2206"},"items":[{"metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14852 chars]
	I0514 00:22:37.039449    4316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 00:22:37.039530    4316 node_conditions.go:123] node cpu capacity is 2
	I0514 00:22:37.039530    4316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 00:22:37.039530    4316 node_conditions.go:123] node cpu capacity is 2
	I0514 00:22:37.039530    4316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 00:22:37.039530    4316 node_conditions.go:123] node cpu capacity is 2
	I0514 00:22:37.039530    4316 node_conditions.go:105] duration metric: took 166.3233ms to run NodePressure ...
	I0514 00:22:37.039530    4316 start.go:240] waiting for startup goroutines ...
	I0514 00:22:37.039618    4316 start.go:254] writing updated cluster config ...
	I0514 00:22:37.048078    4316 ssh_runner.go:195] Run: rm -f paused
	I0514 00:22:37.170898    4316 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0514 00:22:37.173942    4316 out.go:177] * Done! kubectl is now configured to use "multinode-101100" cluster and "default" namespace by default
	
	
	==> Docker <==
	May 14 00:18:04 multinode-101100 dockerd[1049]: 2024/05/14 00:18:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:06 multinode-101100 dockerd[1049]: 2024/05/14 00:18:06 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:06 multinode-101100 dockerd[1049]: 2024/05/14 00:18:06 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:11 multinode-101100 dockerd[1049]: 2024/05/14 00:18:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:11 multinode-101100 dockerd[1049]: 2024/05/14 00:18:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:11 multinode-101100 dockerd[1049]: 2024/05/14 00:18:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:11 multinode-101100 dockerd[1049]: 2024/05/14 00:18:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:11 multinode-101100 dockerd[1049]: 2024/05/14 00:18:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3d0b2f0362eb4       8c811b4aec35f                                                                                         4 minutes ago       Running             busybox                   1                   8cb9b6d6d0915       busybox-fc5497c4f-xqj6w
	dcc5a109288b6       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   1                   1cccb5e8cee3b       coredns-7db6d8ff4d-4kmx4
	bde84ba2d4ed7       6e38f40d628db                                                                                         5 minutes ago       Running             storage-provisioner       2                   468a0e2976ae4       storage-provisioner
	2b424a7cd98c8       4950bb10b3f87                                                                                         5 minutes ago       Running             kindnet-cni               2                   5233e076edceb       kindnet-9q2tv
	b7d8d9a5e5eaf       4950bb10b3f87                                                                                         6 minutes ago       Exited              kindnet-cni               1                   5233e076edceb       kindnet-9q2tv
	b142687b621f1       6e38f40d628db                                                                                         6 minutes ago       Exited              storage-provisioner       1                   468a0e2976ae4       storage-provisioner
	b2a1b31cd7dee       a0bf559e280cf                                                                                         6 minutes ago       Running             kube-proxy                1                   a8ac60a565998       kube-proxy-zhcz6
	08450c853590d       3861cfcd7c04c                                                                                         6 minutes ago       Running             etcd                      0                   419648c0d4053       etcd-multinode-101100
	da9e6534cd87d       c42f13656d0b2                                                                                         6 minutes ago       Running             kube-apiserver            0                   509b8407e0955       kube-apiserver-multinode-101100
	d3581c1c570cf       259c8277fcbbc                                                                                         6 minutes ago       Running             kube-scheduler            1                   ddcaadef980ac       kube-scheduler-multinode-101100
	b87239d1199ab       c7aad43836fa5                                                                                         6 minutes ago       Running             kube-controller-manager   1                   659643d47b9ae       kube-controller-manager-multinode-101100
	57dea5416eb67       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   76d1b8ce19aba       busybox-fc5497c4f-xqj6w
	76c5ab7859eff       cbb01a7bd410d                                                                                         26 minutes ago      Exited              coredns                   0                   8bb49b28c842a       coredns-7db6d8ff4d-4kmx4
	91edaaa00da23       a0bf559e280cf                                                                                         26 minutes ago      Exited              kube-proxy                0                   9bd694480978f       kube-proxy-zhcz6
	e96f94398d6dd       c7aad43836fa5                                                                                         26 minutes ago      Exited              kube-controller-manager   0                   da9268fd6556b       kube-controller-manager-multinode-101100
	964887fc5d362       259c8277fcbbc                                                                                         26 minutes ago      Exited              kube-scheduler            0                   fcb3b27edcd2a       kube-scheduler-multinode-101100
	
	
	==> coredns [76c5ab7859ef] <==
	[INFO] 10.244.0.3:52495 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000145803s
	[INFO] 10.244.0.3:46357 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066702s
	[INFO] 10.244.0.3:41390 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062301s
	[INFO] 10.244.0.3:35739 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000084301s
	[INFO] 10.244.0.3:44800 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000163303s
	[INFO] 10.244.0.3:57631 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068702s
	[INFO] 10.244.0.3:50842 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135702s
	[INFO] 10.244.1.2:41210 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204604s
	[INFO] 10.244.1.2:57858 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073801s
	[INFO] 10.244.1.2:48782 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000152303s
	[INFO] 10.244.1.2:36081 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121002s
	[INFO] 10.244.0.3:46909 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115002s
	[INFO] 10.244.0.3:36030 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000220205s
	[INFO] 10.244.0.3:56187 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059401s
	[INFO] 10.244.0.3:51500 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099802s
	[INFO] 10.244.1.2:57247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147903s
	[INFO] 10.244.1.2:46132 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000170203s
	[INFO] 10.244.1.2:57206 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000452309s
	[INFO] 10.244.1.2:44795 - 5 "PTR IN 1.96.23.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000146203s
	[INFO] 10.244.0.3:33385 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000082102s
	[INFO] 10.244.0.3:56742 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000173704s
	[INFO] 10.244.0.3:46927 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185904s
	[INFO] 10.244.0.3:42956 - 5 "PTR IN 1.96.23.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000054801s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [dcc5a109288b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = aa3c53a4fee7c79042020c4ad5abc53f615c90ace85c56ddcef4febd643c83c914a53a500e1bfe4eab6dd4f6a22b9d2014a8ba875b505ed10d3063ed95ac2ed3
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53257 - 27032 "HINFO IN 6976640239659908905.245956973392320689. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.05278328s
	
	
	==> describe nodes <==
	Name:               multinode-101100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-101100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	                    minikube.k8s.io/name=multinode-101100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_13T23_56_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 May 2024 23:56:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-101100
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 May 2024 00:22:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 May 2024 00:22:41 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 May 2024 00:22:41 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 May 2024 00:22:41 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 May 2024 00:22:41 +0000   Tue, 14 May 2024 00:17:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.102.122
	  Hostname:    multinode-101100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 5110a322e7104904905e303a94b950b6
	  System UUID:                9b23fe4d-6d34-444b-8185-a84d51d23610
	  Boot ID:                    2e73d191-2dbe-4055-a17d-cff8a9e53a15
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xqj6w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-7db6d8ff4d-4kmx4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-multinode-101100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m4s
	  kube-system                 kindnet-9q2tv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-multinode-101100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-controller-manager-multinode-101100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-zhcz6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-multinode-101100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                   From             Message
	  ----    ------                   ----                  ----             -------
	  Normal  Starting                 26m                   kube-proxy       
	  Normal  Starting                 6m1s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  26m (x8 over 26m)     kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26m (x8 over 26m)     kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26m (x7 over 26m)     kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  26m                   kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  26m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    26m                   kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26m                   kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	  Normal  Starting                 26m                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           26m                   node-controller  Node multinode-101100 event: Registered Node multinode-101100 in Controller
	  Normal  NodeReady                26m                   kubelet          Node multinode-101100 status is now: NodeReady
	  Normal  Starting                 6m10s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m10s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m9s (x8 over 6m10s)  kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m9s (x8 over 6m10s)  kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m9s (x7 over 6m10s)  kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m52s                 node-controller  Node multinode-101100 event: Registered Node multinode-101100 in Controller
	
	
	Name:               multinode-101100-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-101100-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	                    minikube.k8s.io/name=multinode-101100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_14T00_20_20_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 May 2024 00:20:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-101100-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 May 2024 00:22:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 May 2024 00:20:26 +0000   Tue, 14 May 2024 00:20:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 May 2024 00:20:26 +0000   Tue, 14 May 2024 00:20:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 May 2024 00:20:26 +0000   Tue, 14 May 2024 00:20:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 May 2024 00:20:26 +0000   Tue, 14 May 2024 00:20:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.97.128
	  Hostname:    multinode-101100-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 7eac7377d3bb4e40acf99c8af02c1e3b
	  System UUID:                4330851b-5248-f245-9378-5fc25e670b55
	  Boot ID:                    333163f1-b084-4523-b207-0d343c1c025a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5rj9g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 kindnet-2lwsm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-proxy-b25hq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m36s                  kube-proxy       
	  Normal  Starting                 23m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x2 over 23m)      kubelet          Node multinode-101100-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x2 over 23m)      kubelet          Node multinode-101100-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x2 over 23m)      kubelet          Node multinode-101100-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                23m                    kubelet          Node multinode-101100-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  2m40s (x2 over 2m40s)  kubelet          Node multinode-101100-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m40s (x2 over 2m40s)  kubelet          Node multinode-101100-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m40s (x2 over 2m40s)  kubelet          Node multinode-101100-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m37s                  node-controller  Node multinode-101100-m02 event: Registered Node multinode-101100-m02 in Controller
	  Normal  NodeReady                2m33s                  kubelet          Node multinode-101100-m02 status is now: NodeReady
	
	
	Name:               multinode-101100-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-101100-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	                    minikube.k8s.io/name=multinode-101100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_14T00_22_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 May 2024 00:22:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-101100-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 May 2024 00:22:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 May 2024 00:22:35 +0000   Tue, 14 May 2024 00:22:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 May 2024 00:22:35 +0000   Tue, 14 May 2024 00:22:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 May 2024 00:22:35 +0000   Tue, 14 May 2024 00:22:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 May 2024 00:22:35 +0000   Tue, 14 May 2024 00:22:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.111.37
	  Hostname:    multinode-101100-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a8bef3345214f33927ed9bf1f9a1561
	  System UUID:                0ee228e5-87a6-0549-9a8d-1747b73431ee
	  Boot ID:                    e676460f-3a83-4ead-9990-8f26c0c78374
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-tfbt8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-8zsgn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 25s                kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)  kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)  kubelet          Node multinode-101100-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)  kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                19m                kubelet          Node multinode-101100-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node multinode-101100-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeReady                10m                kubelet          Node multinode-101100-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  29s (x2 over 29s)  kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s (x2 over 29s)  kubelet          Node multinode-101100-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s (x2 over 29s)  kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27s                node-controller  Node multinode-101100-m03 event: Registered Node multinode-101100-m03 in Controller
	  Normal  NodeReady                24s                kubelet          Node multinode-101100-m03 status is now: NodeReady
	
	
	==> dmesg <==
	              * this clock source is slow. Consider trying other clock sources
	[  +5.692465] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.707713] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.789899] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.282690] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[May14 00:16] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.158382] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[ +23.750429] systemd-fstab-generator[974]: Ignoring "noauto" option for root device
	[  +0.111929] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.464883] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	[  +0.164872] systemd-fstab-generator[1027]: Ignoring "noauto" option for root device
	[  +0.194348] systemd-fstab-generator[1041]: Ignoring "noauto" option for root device
	[  +2.832176] systemd-fstab-generator[1229]: Ignoring "noauto" option for root device
	[  +0.181315] systemd-fstab-generator[1241]: Ignoring "noauto" option for root device
	[  +0.160798] systemd-fstab-generator[1253]: Ignoring "noauto" option for root device
	[  +0.238904] systemd-fstab-generator[1268]: Ignoring "noauto" option for root device
	[  +0.787359] systemd-fstab-generator[1378]: Ignoring "noauto" option for root device
	[  +0.085936] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.384697] systemd-fstab-generator[1513]: Ignoring "noauto" option for root device
	[  +1.802132] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.213940] kauditd_printk_skb: 10 callbacks suppressed
	[  +3.471694] systemd-fstab-generator[2315]: Ignoring "noauto" option for root device
	[May14 00:17] kauditd_printk_skb: 70 callbacks suppressed
	
	
	==> etcd [08450c853590] <==
	{"level":"info","ts":"2024-05-14T00:16:51.816877Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-14T00:16:51.816978Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-14T00:16:51.817493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f switched to configuration voters=(7947751373170489359)"}
	{"level":"info","ts":"2024-05-14T00:16:51.817687Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bb849d1df0b559d7","local-member-id":"6e4c15c3d0f3380f","added-peer-id":"6e4c15c3d0f3380f","added-peer-peer-urls":["https://172.23.106.39:2380"]}
	{"level":"info","ts":"2024-05-14T00:16:51.817911Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bb849d1df0b559d7","local-member-id":"6e4c15c3d0f3380f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-14T00:16:51.818648Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-14T00:16:51.83299Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-14T00:16:51.834951Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6e4c15c3d0f3380f","initial-advertise-peer-urls":["https://172.23.102.122:2380"],"listen-peer-urls":["https://172.23.102.122:2380"],"advertise-client-urls":["https://172.23.102.122:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.102.122:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-14T00:16:51.835138Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-14T00:16:51.835469Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.23.102.122:2380"}
	{"level":"info","ts":"2024-05-14T00:16:51.835603Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.23.102.122:2380"}
	{"level":"info","ts":"2024-05-14T00:16:53.468953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-14T00:16:53.469136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-14T00:16:53.469191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f received MsgPreVoteResp from 6e4c15c3d0f3380f at term 2"}
	{"level":"info","ts":"2024-05-14T00:16:53.469216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became candidate at term 3"}
	{"level":"info","ts":"2024-05-14T00:16:53.469228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f received MsgVoteResp from 6e4c15c3d0f3380f at term 3"}
	{"level":"info","ts":"2024-05-14T00:16:53.469245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became leader at term 3"}
	{"level":"info","ts":"2024-05-14T00:16:53.469259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6e4c15c3d0f3380f elected leader 6e4c15c3d0f3380f at term 3"}
	{"level":"info","ts":"2024-05-14T00:16:53.479025Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"6e4c15c3d0f3380f","local-member-attributes":"{Name:multinode-101100 ClientURLs:[https://172.23.102.122:2379]}","request-path":"/0/members/6e4c15c3d0f3380f/attributes","cluster-id":"bb849d1df0b559d7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-14T00:16:53.479459Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-14T00:16:53.479642Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-14T00:16:53.481317Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-14T00:16:53.481353Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-14T00:16:53.483334Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.23.102.122:2379"}
	{"level":"info","ts":"2024-05-14T00:16:53.483616Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 00:22:59 up 7 min,  0 users,  load average: 0.16, 0.25, 0.14
	Linux multinode-101100 5.10.207 #1 SMP Thu May 9 02:07:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2b424a7cd98c] <==
	I0514 00:22:09.461164       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:22:09.461184       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:22:19.468724       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:22:19.468903       1 main.go:227] handling current node
	I0514 00:22:19.468925       1 main.go:223] Handling node with IPs: map[172.23.97.128:{}]
	I0514 00:22:19.469023       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:22:19.469415       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:22:19.469619       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:22:29.476554       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:22:29.476584       1 main.go:227] handling current node
	I0514 00:22:29.476595       1 main.go:223] Handling node with IPs: map[172.23.97.128:{}]
	I0514 00:22:29.476601       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:22:39.481925       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:22:39.481965       1 main.go:227] handling current node
	I0514 00:22:39.481976       1 main.go:223] Handling node with IPs: map[172.23.97.128:{}]
	I0514 00:22:39.481983       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:22:39.482550       1 main.go:223] Handling node with IPs: map[172.23.111.37:{}]
	I0514 00:22:39.482635       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.2.0/24] 
	I0514 00:22:39.482701       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.23.111.37 Flags: [] Table: 0} 
	I0514 00:22:49.496628       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:22:49.496737       1 main.go:227] handling current node
	I0514 00:22:49.496751       1 main.go:223] Handling node with IPs: map[172.23.97.128:{}]
	I0514 00:22:49.496759       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:22:49.497337       1 main.go:223] Handling node with IPs: map[172.23.111.37:{}]
	I0514 00:22:49.497449       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [b7d8d9a5e5ea] <==
	I0514 00:16:57.751233       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0514 00:16:57.751585       1 main.go:107] hostIP = 172.23.102.122
	podIP = 172.23.102.122
	I0514 00:16:57.752181       1 main.go:116] setting mtu 1500 for CNI 
	I0514 00:16:57.752200       1 main.go:146] kindnetd IP family: "ipv4"
	I0514 00:16:57.752221       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0514 00:17:01.123977       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:17:04.195495       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:17:07.267636       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:17:10.339619       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:17:13.411801       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [da9e6534cd87] <==
	I0514 00:16:54.938841       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0514 00:16:54.950730       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0514 00:16:54.950897       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0514 00:16:54.951294       1 aggregator.go:165] initial CRD sync complete...
	I0514 00:16:54.951545       1 autoregister_controller.go:141] Starting autoregister controller
	I0514 00:16:54.951793       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0514 00:16:54.951875       1 cache.go:39] Caches are synced for autoregister controller
	I0514 00:16:54.962299       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0514 00:16:54.968027       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0514 00:16:54.968302       1 policy_source.go:224] refreshing policies
	I0514 00:16:54.997391       1 shared_informer.go:320] Caches are synced for configmaps
	I0514 00:16:54.999391       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0514 00:16:54.999732       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0514 00:16:54.999871       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0514 00:16:55.037244       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0514 00:16:55.824524       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0514 00:16:56.521956       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.102.122 172.23.106.39]
	I0514 00:16:56.523614       1 controller.go:615] quota admission added evaluator for: endpoints
	I0514 00:16:56.536716       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0514 00:16:57.861026       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0514 00:16:58.068043       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0514 00:16:58.085925       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0514 00:16:58.189328       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0514 00:16:58.200849       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0514 00:17:16.528300       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.102.122]
	
	
	==> kube-controller-manager [b87239d1199a] <==
	I0514 00:18:01.608844       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.702µs"
	I0514 00:18:01.651304       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.008µs"
	I0514 00:18:01.710123       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.783088ms"
	I0514 00:18:01.711762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.302µs"
	I0514 00:20:06.232732       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.947276ms"
	I0514 00:20:06.232825       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.603µs"
	I0514 00:20:06.272284       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.569316ms"
	I0514 00:20:06.272367       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.402µs"
	I0514 00:20:19.847832       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m02\" does not exist"
	I0514 00:20:19.864793       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m02" podCIDRs=["10.244.1.0/24"]
	I0514 00:20:20.749261       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.103µs"
	I0514 00:20:26.533952       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:20:26.568298       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.103µs"
	I0514 00:20:34.823799       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.505µs"
	I0514 00:20:34.839919       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.305µs"
	I0514 00:20:34.869792       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="165.412µs"
	I0514 00:20:34.913147       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.103µs"
	I0514 00:20:34.918380       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.003µs"
	I0514 00:20:35.952839       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.08245ms"
	I0514 00:20:35.953204       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.105µs"
	I0514 00:22:24.786914       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:22:30.376713       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:22:30.376939       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m03\" does not exist"
	I0514 00:22:30.415927       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m03" podCIDRs=["10.244.2.0/24"]
	I0514 00:22:35.343204       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	
	
	==> kube-controller-manager [e96f94398d6d] <==
	I0513 23:59:02.603699       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m02"
	I0513 23:59:22.527169       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0513 23:59:45.791856       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.887671ms"
	I0513 23:59:45.808219       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.096894ms"
	I0513 23:59:45.808747       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.005µs"
	I0513 23:59:45.809833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.705µs"
	I0513 23:59:45.811263       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.604µs"
	I0513 23:59:48.526617       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.926472ms"
	I0513 23:59:48.529326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.302µs"
	I0513 23:59:48.555529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.972453ms"
	I0513 23:59:48.556317       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.601µs"
	I0514 00:03:17.563212       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:03:17.565297       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m03\" does not exist"
	I0514 00:03:17.579900       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m03" podCIDRs=["10.244.2.0/24"]
	I0514 00:03:17.665892       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m03"
	I0514 00:03:38.035898       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:10:17.797191       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:12:39.070271       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:12:44.527915       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:12:44.528275       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m03\" does not exist"
	I0514 00:12:44.543895       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m03" podCIDRs=["10.244.3.0/24"]
	I0514 00:12:49.983419       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:14:17.920991       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:14:33.013074       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.740609ms"
	I0514 00:14:33.013918       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.506µs"
	
	
	==> kube-proxy [91edaaa00da2] <==
	I0513 23:56:24.901713       1 server_linux.go:69] "Using iptables proxy"
	I0513 23:56:24.929714       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.106.39"]
	I0513 23:56:24.982680       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0513 23:56:24.982795       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0513 23:56:24.982816       1 server_linux.go:165] "Using iptables Proxier"
	I0513 23:56:24.988669       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0513 23:56:24.989566       1 server.go:872] "Version info" version="v1.30.0"
	I0513 23:56:24.989671       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0513 23:56:24.992700       1 config.go:192] "Starting service config controller"
	I0513 23:56:24.993131       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0513 23:56:24.993327       1 config.go:101] "Starting endpoint slice config controller"
	I0513 23:56:24.993339       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0513 23:56:24.994714       1 config.go:319] "Starting node config controller"
	I0513 23:56:24.994744       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0513 23:56:25.094420       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0513 23:56:25.094530       1 shared_informer.go:320] Caches are synced for service config
	I0513 23:56:25.094981       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [b2a1b31cd7de] <==
	I0514 00:16:57.528613       1 server_linux.go:69] "Using iptables proxy"
	I0514 00:16:57.562847       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.102.122"]
	I0514 00:16:57.701301       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0514 00:16:57.701447       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0514 00:16:57.701476       1 server_linux.go:165] "Using iptables Proxier"
	I0514 00:16:57.708219       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0514 00:16:57.708800       1 server.go:872] "Version info" version="v1.30.0"
	I0514 00:16:57.708841       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:16:57.712422       1 config.go:192] "Starting service config controller"
	I0514 00:16:57.712733       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0514 00:16:57.712780       1 config.go:101] "Starting endpoint slice config controller"
	I0514 00:16:57.712824       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0514 00:16:57.715339       1 config.go:319] "Starting node config controller"
	I0514 00:16:57.717651       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0514 00:16:57.815732       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0514 00:16:57.815811       1 shared_informer.go:320] Caches are synced for service config
	I0514 00:16:57.818050       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [964887fc5d36] <==
	E0513 23:56:07.344853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0513 23:56:07.410556       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0513 23:56:07.410716       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0513 23:56:07.423084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0513 23:56:07.423126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0513 23:56:07.467897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0513 23:56:07.467939       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0513 23:56:07.484903       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0513 23:56:07.485019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0513 23:56:07.545758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0513 23:56:07.546087       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0513 23:56:07.573884       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0513 23:56:07.573980       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0513 23:56:07.633780       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0513 23:56:07.633901       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0513 23:56:07.680821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0513 23:56:07.680938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0513 23:56:07.704130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0513 23:56:07.704357       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0513 23:56:07.736914       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0513 23:56:07.737079       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0513 23:56:07.754367       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0513 23:56:07.754798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0513 23:56:09.676327       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0514 00:14:35.689344       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d3581c1c570c] <==
	I0514 00:16:52.716401       1 serving.go:380] Generated self-signed cert in-memory
	W0514 00:16:54.858727       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0514 00:16:54.858778       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0514 00:16:54.858790       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0514 00:16:54.858800       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0514 00:16:54.945438       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0514 00:16:54.945867       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:16:54.953986       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0514 00:16:54.957180       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0514 00:16:54.957284       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:16:54.957493       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:16:55.058381       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 14 00:18:49 multinode-101100 kubelet[1520]: E0514 00:18:49.924631    1520 iptables.go:577] "Could not set up iptables canary" err=<
	May 14 00:18:49 multinode-101100 kubelet[1520]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 14 00:18:49 multinode-101100 kubelet[1520]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 14 00:18:49 multinode-101100 kubelet[1520]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 14 00:18:49 multinode-101100 kubelet[1520]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 14 00:19:49 multinode-101100 kubelet[1520]: E0514 00:19:49.922932    1520 iptables.go:577] "Could not set up iptables canary" err=<
	May 14 00:19:49 multinode-101100 kubelet[1520]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 14 00:19:49 multinode-101100 kubelet[1520]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 14 00:19:49 multinode-101100 kubelet[1520]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 14 00:19:49 multinode-101100 kubelet[1520]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 14 00:20:49 multinode-101100 kubelet[1520]: E0514 00:20:49.922147    1520 iptables.go:577] "Could not set up iptables canary" err=<
	May 14 00:20:49 multinode-101100 kubelet[1520]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 14 00:20:49 multinode-101100 kubelet[1520]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 14 00:20:49 multinode-101100 kubelet[1520]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 14 00:20:49 multinode-101100 kubelet[1520]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 14 00:21:49 multinode-101100 kubelet[1520]: E0514 00:21:49.922718    1520 iptables.go:577] "Could not set up iptables canary" err=<
	May 14 00:21:49 multinode-101100 kubelet[1520]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 14 00:21:49 multinode-101100 kubelet[1520]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 14 00:21:49 multinode-101100 kubelet[1520]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 14 00:21:49 multinode-101100 kubelet[1520]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 14 00:22:49 multinode-101100 kubelet[1520]: E0514 00:22:49.927158    1520 iptables.go:577] "Could not set up iptables canary" err=<
	May 14 00:22:49 multinode-101100 kubelet[1520]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 14 00:22:49 multinode-101100 kubelet[1520]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 14 00:22:49 multinode-101100 kubelet[1520]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 14 00:22:49 multinode-101100 kubelet[1520]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0514 00:22:48.185520    7644 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-101100 -n multinode-101100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-101100 -n multinode-101100: (10.6218778s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-101100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (594.32s)

                                                
                                    
TestMultiNode/serial/DeleteNode (46.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-101100 node delete m03: exit status 1 (6.2302722s)

                                                
                                                
** stderr ** 
	W0514 00:23:17.791868    2604 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:418: node delete returned an error. args "out/minikube-windows-amd64.exe -p multinode-101100 node delete m03": exit status 1
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-101100 status --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:424: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-101100 status --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-101100 -n multinode-101100
E0514 00:23:33.049752    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-101100 -n multinode-101100: (10.5578928s)
helpers_test.go:244: <<< TestMultiNode/serial/DeleteNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 logs -n 25: (11.3818006s)
helpers_test.go:252: TestMultiNode/serial/DeleteNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                          Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-101100 cp multinode-101100-m02:/home/docker/cp-test.txt                                                       | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:06 UTC | 14 May 24 00:06 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile439564435\001\cp-test_multinode-101100-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-101100 ssh -n                                                                                                 | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:06 UTC | 14 May 24 00:07 UTC |
	|         | multinode-101100-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-101100 cp multinode-101100-m02:/home/docker/cp-test.txt                                                       | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:07 UTC | 14 May 24 00:07 UTC |
	|         | multinode-101100:/home/docker/cp-test_multinode-101100-m02_multinode-101100.txt                                         |                  |                   |         |                     |                     |
	| ssh     | multinode-101100 ssh -n                                                                                                 | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:07 UTC | 14 May 24 00:07 UTC |
	|         | multinode-101100-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-101100 ssh -n multinode-101100 sudo cat                                                                       | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:07 UTC | 14 May 24 00:07 UTC |
	|         | /home/docker/cp-test_multinode-101100-m02_multinode-101100.txt                                                          |                  |                   |         |                     |                     |
	| cp      | multinode-101100 cp multinode-101100-m02:/home/docker/cp-test.txt                                                       | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:07 UTC | 14 May 24 00:07 UTC |
	|         | multinode-101100-m03:/home/docker/cp-test_multinode-101100-m02_multinode-101100-m03.txt                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-101100 ssh -n                                                                                                 | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:07 UTC | 14 May 24 00:07 UTC |
	|         | multinode-101100-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-101100 ssh -n multinode-101100-m03 sudo cat                                                                   | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:07 UTC | 14 May 24 00:08 UTC |
	|         | /home/docker/cp-test_multinode-101100-m02_multinode-101100-m03.txt                                                      |                  |                   |         |                     |                     |
	| cp      | multinode-101100 cp testdata\cp-test.txt                                                                                | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:08 UTC | 14 May 24 00:08 UTC |
	|         | multinode-101100-m03:/home/docker/cp-test.txt                                                                           |                  |                   |         |                     |                     |
	| ssh     | multinode-101100 ssh -n                                                                                                 | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:08 UTC | 14 May 24 00:08 UTC |
	|         | multinode-101100-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-101100 cp multinode-101100-m03:/home/docker/cp-test.txt                                                       | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:08 UTC | 14 May 24 00:08 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile439564435\001\cp-test_multinode-101100-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-101100 ssh -n                                                                                                 | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:08 UTC | 14 May 24 00:08 UTC |
	|         | multinode-101100-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-101100 cp multinode-101100-m03:/home/docker/cp-test.txt                                                       | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:08 UTC | 14 May 24 00:08 UTC |
	|         | multinode-101100:/home/docker/cp-test_multinode-101100-m03_multinode-101100.txt                                         |                  |                   |         |                     |                     |
	| ssh     | multinode-101100 ssh -n                                                                                                 | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:08 UTC | 14 May 24 00:08 UTC |
	|         | multinode-101100-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-101100 ssh -n multinode-101100 sudo cat                                                                       | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:08 UTC | 14 May 24 00:09 UTC |
	|         | /home/docker/cp-test_multinode-101100-m03_multinode-101100.txt                                                          |                  |                   |         |                     |                     |
	| cp      | multinode-101100 cp multinode-101100-m03:/home/docker/cp-test.txt                                                       | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:09 UTC | 14 May 24 00:09 UTC |
	|         | multinode-101100-m02:/home/docker/cp-test_multinode-101100-m03_multinode-101100-m02.txt                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-101100 ssh -n                                                                                                 | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:09 UTC | 14 May 24 00:09 UTC |
	|         | multinode-101100-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-101100 ssh -n multinode-101100-m02 sudo cat                                                                   | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:09 UTC | 14 May 24 00:09 UTC |
	|         | /home/docker/cp-test_multinode-101100-m03_multinode-101100-m02.txt                                                      |                  |                   |         |                     |                     |
	| node    | multinode-101100 node stop m03                                                                                          | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:09 UTC | 14 May 24 00:09 UTC |
	| node    | multinode-101100 node start                                                                                             | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:10 UTC | 14 May 24 00:12 UTC |
	|         | m03 -v=7 --alsologtostderr                                                                                              |                  |                   |         |                     |                     |
	| node    | list -p multinode-101100                                                                                                | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:13 UTC |                     |
	| stop    | -p multinode-101100                                                                                                     | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:13 UTC | 14 May 24 00:14 UTC |
	| start   | -p multinode-101100                                                                                                     | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:14 UTC | 14 May 24 00:22 UTC |
	|         | --wait=true -v=8                                                                                                        |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                       |                  |                   |         |                     |                     |
	| node    | list -p multinode-101100                                                                                                | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:22 UTC |                     |
	| node    | multinode-101100 node delete                                                                                            | multinode-101100 | minikube5\jenkins | v1.33.1 | 14 May 24 00:23 UTC |                     |
	|         | m03                                                                                                                     |                  |                   |         |                     |                     |
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/14 00:14:56
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0514 00:14:56.185714    4316 out.go:291] Setting OutFile to fd 880 ...
	I0514 00:14:56.186038    4316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0514 00:14:56.186038    4316 out.go:304] Setting ErrFile to fd 968...
	I0514 00:14:56.186038    4316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0514 00:14:56.205486    4316 out.go:298] Setting JSON to false
	I0514 00:14:56.208459    4316 start.go:129] hostinfo: {"hostname":"minikube5","uptime":7259,"bootTime":1715638436,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4355 Build 19045.4355","kernelVersion":"10.0.19045.4355 Build 19045.4355","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0514 00:14:56.208459    4316 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0514 00:14:56.349739    4316 out.go:177] * [multinode-101100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	I0514 00:14:56.395109    4316 notify.go:220] Checking for updates...
	I0514 00:14:56.554164    4316 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0514 00:14:56.757342    4316 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0514 00:14:56.904945    4316 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0514 00:14:57.042288    4316 out.go:177]   - MINIKUBE_LOCATION=18872
	I0514 00:14:57.142144    4316 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0514 00:14:57.296370    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:14:57.296934    4316 driver.go:392] Setting default libvirt URI to qemu:///system
	I0514 00:15:02.363917    4316 out.go:177] * Using the hyperv driver based on existing profile
	I0514 00:15:02.408815    4316 start.go:297] selected driver: hyperv
	I0514 00:15:02.409275    4316 start.go:901] validating driver "hyperv" against &{Name:multinode-101100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-101100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.106.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.109.58 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.102.231 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0514 00:15:02.409586    4316 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0514 00:15:02.452500    4316 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0514 00:15:02.453496    4316 cni.go:84] Creating CNI manager for ""
	I0514 00:15:02.453496    4316 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0514 00:15:02.453639    4316 start.go:340] cluster config:
	{Name:multinode-101100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-101100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.106.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.109.58 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.102.231 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0514 00:15:02.453996    4316 iso.go:125] acquiring lock: {Name:mkcecbdb7e30e5a0901160a859f9d5b65d250c44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0514 00:15:02.507202    4316 out.go:177] * Starting "multinode-101100" primary control-plane node in "multinode-101100" cluster
	I0514 00:15:02.510874    4316 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0514 00:15:02.511223    4316 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0514 00:15:02.511223    4316 cache.go:56] Caching tarball of preloaded images
	I0514 00:15:02.511411    4316 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0514 00:15:02.511411    4316 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0514 00:15:02.512312    4316 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\config.json ...
	I0514 00:15:02.515317    4316 start.go:360] acquireMachinesLock for multinode-101100: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0514 00:15:02.515317    4316 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-101100"
	I0514 00:15:02.515317    4316 start.go:96] Skipping create...Using existing machine configuration
	I0514 00:15:02.515317    4316 fix.go:54] fixHost starting: 
	I0514 00:15:02.516003    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:15:05.006202    4316 main.go:141] libmachine: [stdout =====>] : Off
	
	I0514 00:15:05.006370    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:05.006370    4316 fix.go:112] recreateIfNeeded on multinode-101100: state=Stopped err=<nil>
	W0514 00:15:05.006370    4316 fix.go:138] unexpected machine state, will restart: <nil>
	I0514 00:15:05.009270    4316 out.go:177] * Restarting existing hyperv VM for "multinode-101100" ...
	I0514 00:15:05.013132    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-101100
	I0514 00:15:07.915262    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:15:07.915443    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:07.915443    4316 main.go:141] libmachine: Waiting for host to start...
	I0514 00:15:07.915506    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:15:09.985756    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:15:09.985756    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:09.985756    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:15:12.281832    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:15:12.281832    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:13.296646    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:15:15.289244    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:15:15.290314    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:15.290314    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:15:17.554873    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:15:17.554873    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:18.569060    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:15:20.499826    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:15:20.499826    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:20.499826    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:15:22.713351    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:15:22.713351    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:23.725580    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:15:25.689973    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:15:25.690050    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:25.690050    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:15:27.970131    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:15:27.970543    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:28.974492    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:15:30.950015    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:15:30.950015    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:30.950015    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:15:33.269358    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:15:33.269970    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:33.271964    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:15:35.155916    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:15:35.155916    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:35.155916    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:15:37.425806    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:15:37.426548    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:37.426548    4316 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\config.json ...
	I0514 00:15:37.428923    4316 machine.go:94] provisionDockerMachine start ...
	I0514 00:15:37.429023    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:15:39.378767    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:15:39.378767    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:39.379476    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:15:41.660453    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:15:41.660453    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:41.664778    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:15:41.665371    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.122 22 <nil> <nil>}
	I0514 00:15:41.665371    4316 main.go:141] libmachine: About to run SSH command:
	hostname
	I0514 00:15:41.789131    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0514 00:15:41.789131    4316 buildroot.go:166] provisioning hostname "multinode-101100"
	I0514 00:15:41.789131    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:15:43.658216    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:15:43.658741    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:43.658741    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:15:45.959367    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:15:45.959803    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:45.963564    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:15:45.964004    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.122 22 <nil> <nil>}
	I0514 00:15:45.964004    4316 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-101100 && echo "multinode-101100" | sudo tee /etc/hostname
	I0514 00:15:46.113194    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-101100
	
	I0514 00:15:46.113194    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:15:48.037299    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:15:48.037299    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:48.037299    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:15:50.304945    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:15:50.304945    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:50.309336    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:15:50.309848    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.122 22 <nil> <nil>}
	I0514 00:15:50.309848    4316 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-101100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-101100/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-101100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0514 00:15:50.454395    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0514 00:15:50.454566    4316 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0514 00:15:50.454566    4316 buildroot.go:174] setting up certificates
	I0514 00:15:50.454566    4316 provision.go:84] configureAuth start
	I0514 00:15:50.454566    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:15:52.344110    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:15:52.344807    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:52.345142    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:15:54.665648    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:15:54.665648    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:54.665648    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:15:56.577827    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:15:56.577827    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:56.578937    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:15:58.947308    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:15:58.947418    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:15:58.947418    4316 provision.go:143] copyHostCerts
	I0514 00:15:58.947598    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0514 00:15:58.947775    4316 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0514 00:15:58.947867    4316 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0514 00:15:58.948155    4316 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0514 00:15:58.949029    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0514 00:15:58.949250    4316 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0514 00:15:58.949250    4316 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0514 00:15:58.949547    4316 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0514 00:15:58.950364    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0514 00:15:58.950364    4316 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0514 00:15:58.950364    4316 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0514 00:15:58.950364    4316 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0514 00:15:58.951662    4316 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-101100 san=[127.0.0.1 172.23.102.122 localhost minikube multinode-101100]
	I0514 00:15:59.389335    4316 provision.go:177] copyRemoteCerts
	I0514 00:15:59.398611    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0514 00:15:59.398740    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:16:01.402063    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:16:01.402063    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:01.403107    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:16:03.739112    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:16:03.739112    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:03.739112    4316 sshutil.go:53] new ssh client: &{IP:172.23.102.122 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\id_rsa Username:docker}
	I0514 00:16:03.845665    4316 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4467383s)
	I0514 00:16:03.845735    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0514 00:16:03.845857    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0514 00:16:03.899538    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0514 00:16:03.899960    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0514 00:16:03.950478    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0514 00:16:03.950478    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0514 00:16:03.991804    4316 provision.go:87] duration metric: took 13.5364113s to configureAuth
	I0514 00:16:03.991894    4316 buildroot.go:189] setting minikube options for container-runtime
	I0514 00:16:03.992600    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:16:03.992696    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:16:05.864478    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:16:05.864478    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:05.864478    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:16:08.115704    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:16:08.115704    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:08.118812    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:16:08.119401    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.122 22 <nil> <nil>}
	I0514 00:16:08.119401    4316 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0514 00:16:08.248745    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0514 00:16:08.248818    4316 buildroot.go:70] root file system type: tmpfs
	I0514 00:16:08.248916    4316 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0514 00:16:08.248916    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:16:10.126009    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:16:10.126009    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:10.126666    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:16:12.366162    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:16:12.366162    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:12.370602    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:16:12.371197    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.122 22 <nil> <nil>}
	I0514 00:16:12.371197    4316 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0514 00:16:12.518398    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0514 00:16:12.518469    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:16:14.346708    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:16:14.346708    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:14.346708    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:16:16.561242    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:16:16.561352    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:16.566359    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:16:16.566886    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.122 22 <nil> <nil>}
	I0514 00:16:16.567001    4316 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0514 00:16:18.958992    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0514 00:16:18.958992    4316 machine.go:97] duration metric: took 41.5275329s to provisionDockerMachine
	I0514 00:16:18.959976    4316 start.go:293] postStartSetup for "multinode-101100" (driver="hyperv")
	I0514 00:16:18.959976    4316 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0514 00:16:18.968760    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0514 00:16:18.968760    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:16:20.830444    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:16:20.830444    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:20.830963    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:16:23.021443    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:16:23.021443    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:23.022004    4316 sshutil.go:53] new ssh client: &{IP:172.23.102.122 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\id_rsa Username:docker}
	I0514 00:16:23.127972    4316 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.1589562s)
	I0514 00:16:23.135911    4316 ssh_runner.go:195] Run: cat /etc/os-release
	I0514 00:16:23.142708    4316 command_runner.go:130] > NAME=Buildroot
	I0514 00:16:23.142770    4316 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0514 00:16:23.142840    4316 command_runner.go:130] > ID=buildroot
	I0514 00:16:23.142840    4316 command_runner.go:130] > VERSION_ID=2023.02.9
	I0514 00:16:23.142894    4316 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0514 00:16:23.142975    4316 info.go:137] Remote host: Buildroot 2023.02.9
	I0514 00:16:23.142975    4316 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0514 00:16:23.142975    4316 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0514 00:16:23.144321    4316 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> 59842.pem in /etc/ssl/certs
	I0514 00:16:23.144321    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /etc/ssl/certs/59842.pem
	I0514 00:16:23.152311    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0514 00:16:23.167204    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /etc/ssl/certs/59842.pem (1708 bytes)
	I0514 00:16:23.208551    4316 start.go:296] duration metric: took 4.2483151s for postStartSetup
	I0514 00:16:23.208609    4316 fix.go:56] duration metric: took 1m20.6883818s for fixHost
	I0514 00:16:23.208676    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:16:25.059477    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:16:25.059477    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:25.059477    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:16:27.251865    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:16:27.251865    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:27.255851    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:16:27.255933    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.122 22 <nil> <nil>}
	I0514 00:16:27.255933    4316 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0514 00:16:27.393753    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715645787.622992710
	
	I0514 00:16:27.393753    4316 fix.go:216] guest clock: 1715645787.622992710
	I0514 00:16:27.393859    4316 fix.go:229] Guest: 2024-05-14 00:16:27.62299271 +0000 UTC Remote: 2024-05-14 00:16:23.2086094 +0000 UTC m=+87.138302401 (delta=4.41438331s)
	I0514 00:16:27.394004    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:16:29.282211    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:16:29.282211    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:29.282298    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:16:31.521171    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:16:31.521171    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:31.524707    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:16:31.525326    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.102.122 22 <nil> <nil>}
	I0514 00:16:31.525326    4316 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715645787
	I0514 00:16:31.656871    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 14 00:16:27 UTC 2024
	
	I0514 00:16:31.656871    4316 fix.go:236] clock set: Tue May 14 00:16:27 UTC 2024
	 (err=<nil>)
	I0514 00:16:31.656871    4316 start.go:83] releasing machines lock for "multinode-101100", held for 1m29.136123s
	I0514 00:16:31.657876    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:16:33.514775    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:16:33.514775    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:33.515311    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:16:35.727156    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:16:35.727479    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:35.730496    4316 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0514 00:16:35.730708    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:16:35.737940    4316 ssh_runner.go:195] Run: cat /version.json
	I0514 00:16:35.737940    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:16:37.650826    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:16:37.651706    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:37.651766    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:16:37.653750    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:16:37.653750    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:37.653750    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:16:39.992402    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:16:39.992402    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:39.992716    4316 sshutil.go:53] new ssh client: &{IP:172.23.102.122 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\id_rsa Username:docker}
	I0514 00:16:40.013262    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:16:40.013262    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:16:40.013982    4316 sshutil.go:53] new ssh client: &{IP:172.23.102.122 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\id_rsa Username:docker}
	I0514 00:16:40.170923    4316 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0514 00:16:40.170923    4316 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.440079s)
	I0514 00:16:40.170923    4316 command_runner.go:130] > {"iso_version": "v1.33.1", "kicbase_version": "v0.0.43-1714992375-18804", "minikube_version": "v1.33.1", "commit": "d6e0d89dd5607476c1efbac5f05c928d4cd7ef53"}
	I0514 00:16:40.170923    4316 ssh_runner.go:235] Completed: cat /version.json: (4.432709s)
	I0514 00:16:40.181732    4316 ssh_runner.go:195] Run: systemctl --version
	I0514 00:16:40.190102    4316 command_runner.go:130] > systemd 252 (252)
	I0514 00:16:40.190102    4316 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0514 00:16:40.201494    4316 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0514 00:16:40.209136    4316 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0514 00:16:40.209862    4316 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0514 00:16:40.217883    4316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0514 00:16:40.244144    4316 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0514 00:16:40.244710    4316 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0514 00:16:40.244777    4316 start.go:494] detecting cgroup driver to use...
	I0514 00:16:40.244814    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 00:16:40.274963    4316 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0514 00:16:40.285057    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0514 00:16:40.315083    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0514 00:16:40.341864    4316 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0514 00:16:40.352949    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0514 00:16:40.378197    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 00:16:40.403394    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0514 00:16:40.434406    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 00:16:40.462651    4316 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0514 00:16:40.488861    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0514 00:16:40.517167    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0514 00:16:40.548685    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0514 00:16:40.577045    4316 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0514 00:16:40.591943    4316 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0514 00:16:40.600861    4316 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0514 00:16:40.626460    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:16:40.820490    4316 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0514 00:16:40.852637    4316 start.go:494] detecting cgroup driver to use...
	I0514 00:16:40.863007    4316 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0514 00:16:40.883155    4316 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0514 00:16:40.883155    4316 command_runner.go:130] > [Unit]
	I0514 00:16:40.883155    4316 command_runner.go:130] > Description=Docker Application Container Engine
	I0514 00:16:40.883155    4316 command_runner.go:130] > Documentation=https://docs.docker.com
	I0514 00:16:40.883155    4316 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0514 00:16:40.883155    4316 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0514 00:16:40.883155    4316 command_runner.go:130] > StartLimitBurst=3
	I0514 00:16:40.883155    4316 command_runner.go:130] > StartLimitIntervalSec=60
	I0514 00:16:40.883155    4316 command_runner.go:130] > [Service]
	I0514 00:16:40.883155    4316 command_runner.go:130] > Type=notify
	I0514 00:16:40.883155    4316 command_runner.go:130] > Restart=on-failure
	I0514 00:16:40.883155    4316 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0514 00:16:40.883597    4316 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0514 00:16:40.883597    4316 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0514 00:16:40.883597    4316 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0514 00:16:40.883597    4316 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0514 00:16:40.883597    4316 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0514 00:16:40.883695    4316 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0514 00:16:40.883695    4316 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0514 00:16:40.883695    4316 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0514 00:16:40.883695    4316 command_runner.go:130] > ExecStart=
	I0514 00:16:40.883775    4316 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0514 00:16:40.883775    4316 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0514 00:16:40.883775    4316 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0514 00:16:40.883775    4316 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0514 00:16:40.883861    4316 command_runner.go:130] > LimitNOFILE=infinity
	I0514 00:16:40.883861    4316 command_runner.go:130] > LimitNPROC=infinity
	I0514 00:16:40.883861    4316 command_runner.go:130] > LimitCORE=infinity
	I0514 00:16:40.883861    4316 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0514 00:16:40.883861    4316 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0514 00:16:40.883928    4316 command_runner.go:130] > TasksMax=infinity
	I0514 00:16:40.883928    4316 command_runner.go:130] > TimeoutStartSec=0
	I0514 00:16:40.883928    4316 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0514 00:16:40.883928    4316 command_runner.go:130] > Delegate=yes
	I0514 00:16:40.883928    4316 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0514 00:16:40.883992    4316 command_runner.go:130] > KillMode=process
	I0514 00:16:40.883992    4316 command_runner.go:130] > [Install]
	I0514 00:16:40.883992    4316 command_runner.go:130] > WantedBy=multi-user.target
	I0514 00:16:40.893446    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 00:16:40.921952    4316 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0514 00:16:40.955515    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 00:16:40.983495    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0514 00:16:41.012286    4316 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0514 00:16:41.067488    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0514 00:16:41.087023    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 00:16:41.116335    4316 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0514 00:16:41.127189    4316 ssh_runner.go:195] Run: which cri-dockerd
	I0514 00:16:41.133000    4316 command_runner.go:130] > /usr/bin/cri-dockerd
	I0514 00:16:41.141763    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0514 00:16:41.157407    4316 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0514 00:16:41.199050    4316 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0514 00:16:41.372093    4316 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0514 00:16:41.524964    4316 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0514 00:16:41.525288    4316 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0514 00:16:41.562963    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:16:41.735982    4316 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0514 00:16:44.313444    4316 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5773018s)
	I0514 00:16:44.322479    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0514 00:16:44.357441    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 00:16:44.389854    4316 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0514 00:16:44.571917    4316 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0514 00:16:44.733604    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:16:44.907417    4316 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0514 00:16:44.941956    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 00:16:44.971809    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:16:45.153688    4316 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0514 00:16:45.270309    4316 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0514 00:16:45.279530    4316 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0514 00:16:45.292735    4316 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0514 00:16:45.292735    4316 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0514 00:16:45.292735    4316 command_runner.go:130] > Device: 0,22	Inode: 856         Links: 1
	I0514 00:16:45.292735    4316 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0514 00:16:45.292735    4316 command_runner.go:130] > Access: 2024-05-14 00:16:45.408202295 +0000
	I0514 00:16:45.292735    4316 command_runner.go:130] > Modify: 2024-05-14 00:16:45.408202295 +0000
	I0514 00:16:45.292735    4316 command_runner.go:130] > Change: 2024-05-14 00:16:45.412202572 +0000
	I0514 00:16:45.292735    4316 command_runner.go:130] >  Birth: -
	I0514 00:16:45.292735    4316 start.go:562] Will wait 60s for crictl version
	I0514 00:16:45.302798    4316 ssh_runner.go:195] Run: which crictl
	I0514 00:16:45.309565    4316 command_runner.go:130] > /usr/bin/crictl
	I0514 00:16:45.318466    4316 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0514 00:16:45.363979    4316 command_runner.go:130] > Version:  0.1.0
	I0514 00:16:45.364568    4316 command_runner.go:130] > RuntimeName:  docker
	I0514 00:16:45.364568    4316 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0514 00:16:45.364568    4316 command_runner.go:130] > RuntimeApiVersion:  v1
	I0514 00:16:45.365985    4316 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0514 00:16:45.373806    4316 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 00:16:45.398333    4316 command_runner.go:130] > 26.0.2
	I0514 00:16:45.406271    4316 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 00:16:45.434253    4316 command_runner.go:130] > 26.0.2
	I0514 00:16:45.439147    4316 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0514 00:16:45.439323    4316 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0514 00:16:45.443156    4316 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0514 00:16:45.443156    4316 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0514 00:16:45.443211    4316 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0514 00:16:45.443211    4316 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:95:ed Flags:up|broadcast|multicast|running}
	I0514 00:16:45.445096    4316 ip.go:210] interface addr: fe80::3ceb:68d:afab:af25/64
	I0514 00:16:45.445096    4316 ip.go:210] interface addr: 172.23.96.1/20
	I0514 00:16:45.452094    4316 ssh_runner.go:195] Run: grep 172.23.96.1	host.minikube.internal$ /etc/hosts
	I0514 00:16:45.458825    4316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0514 00:16:45.478357    4316 kubeadm.go:877] updating cluster {Name:multinode-101100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-101100 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.102.122 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.109.58 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.102.231 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-
provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0514 00:16:45.478606    4316 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0514 00:16:45.485091    4316 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0514 00:16:45.506395    4316 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0514 00:16:45.506395    4316 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0514 00:16:45.506395    4316 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0514 00:16:45.506395    4316 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0514 00:16:45.506395    4316 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0514 00:16:45.506395    4316 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0514 00:16:45.506395    4316 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0514 00:16:45.506395    4316 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0514 00:16:45.506395    4316 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0514 00:16:45.506395    4316 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0514 00:16:45.506395    4316 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0514 00:16:45.506395    4316 docker.go:615] Images already preloaded, skipping extraction
	I0514 00:16:45.514627    4316 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0514 00:16:45.535349    4316 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0514 00:16:45.535349    4316 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0514 00:16:45.535349    4316 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0514 00:16:45.535349    4316 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0514 00:16:45.535349    4316 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0514 00:16:45.535349    4316 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0514 00:16:45.535349    4316 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0514 00:16:45.535349    4316 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0514 00:16:45.535799    4316 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0514 00:16:45.535799    4316 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0514 00:16:45.536313    4316 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0514 00:16:45.536398    4316 cache_images.go:84] Images are preloaded, skipping loading
	I0514 00:16:45.536398    4316 kubeadm.go:928] updating node { 172.23.102.122 8443 v1.30.0 docker true true} ...
	I0514 00:16:45.536570    4316 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-101100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.102.122
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-101100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0514 00:16:45.543082    4316 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0514 00:16:45.571915    4316 command_runner.go:130] > cgroupfs
	I0514 00:16:45.572196    4316 cni.go:84] Creating CNI manager for ""
	I0514 00:16:45.572196    4316 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0514 00:16:45.572264    4316 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0514 00:16:45.572343    4316 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.102.122 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-101100 NodeName:multinode-101100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.102.122"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.102.122 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0514 00:16:45.572629    4316 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.102.122
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-101100"
	  kubeletExtraArgs:
	    node-ip: 172.23.102.122
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.102.122"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0514 00:16:45.584627    4316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0514 00:16:45.603423    4316 command_runner.go:130] > kubeadm
	I0514 00:16:45.603457    4316 command_runner.go:130] > kubectl
	I0514 00:16:45.603457    4316 command_runner.go:130] > kubelet
	I0514 00:16:45.603511    4316 binaries.go:44] Found k8s binaries, skipping transfer
	I0514 00:16:45.613121    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0514 00:16:45.629761    4316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0514 00:16:45.668552    4316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0514 00:16:45.696749    4316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0514 00:16:45.737685    4316 ssh_runner.go:195] Run: grep 172.23.102.122	control-plane.minikube.internal$ /etc/hosts
	I0514 00:16:45.744447    4316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.102.122	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0514 00:16:45.770880    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:16:45.928609    4316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0514 00:16:45.953422    4316 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100 for IP: 172.23.102.122
	I0514 00:16:45.953422    4316 certs.go:194] generating shared ca certs ...
	I0514 00:16:45.953422    4316 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 00:16:45.954202    4316 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0514 00:16:45.954389    4316 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0514 00:16:45.954389    4316 certs.go:256] generating profile certs ...
	I0514 00:16:45.955082    4316 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\client.key
	I0514 00:16:45.955155    4316 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.key.d596c974
	I0514 00:16:45.955155    4316 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.crt.d596c974 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.102.122]
	I0514 00:16:46.073965    4316 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.crt.d596c974 ...
	I0514 00:16:46.073965    4316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.crt.d596c974: {Name:mk0abe85a6f763d7b15aec7cf028af93a3b41188 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 00:16:46.075203    4316 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.key.d596c974 ...
	I0514 00:16:46.075203    4316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.key.d596c974: {Name:mkc641951683ee38c2ef89b0e9f4e36ad27cbf87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 00:16:46.075830    4316 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.crt.d596c974 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.crt
	I0514 00:16:46.086730    4316 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.key.d596c974 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.key
	I0514 00:16:46.088198    4316 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\proxy-client.key
	I0514 00:16:46.088198    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0514 00:16:46.088590    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0514 00:16:46.088590    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0514 00:16:46.088590    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0514 00:16:46.088590    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0514 00:16:46.088590    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0514 00:16:46.089189    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0514 00:16:46.089189    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0514 00:16:46.089783    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem (1338 bytes)
	W0514 00:16:46.089783    4316 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984_empty.pem, impossibly tiny 0 bytes
	I0514 00:16:46.089783    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0514 00:16:46.089783    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0514 00:16:46.090380    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0514 00:16:46.090380    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0514 00:16:46.090949    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem (1708 bytes)
	I0514 00:16:46.091048    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /usr/share/ca-certificates/59842.pem
	I0514 00:16:46.091048    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:16:46.091048    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem -> /usr/share/ca-certificates/5984.pem
	I0514 00:16:46.092203    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0514 00:16:46.136905    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0514 00:16:46.185458    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0514 00:16:46.233388    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0514 00:16:46.277608    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0514 00:16:46.320142    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0514 00:16:46.362716    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0514 00:16:46.405730    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0514 00:16:46.447453    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /usr/share/ca-certificates/59842.pem (1708 bytes)
	I0514 00:16:46.488234    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0514 00:16:46.530905    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem --> /usr/share/ca-certificates/5984.pem (1338 bytes)
	I0514 00:16:46.578512    4316 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0514 00:16:46.623582    4316 ssh_runner.go:195] Run: openssl version
	I0514 00:16:46.631851    4316 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0514 00:16:46.641440    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59842.pem && ln -fs /usr/share/ca-certificates/59842.pem /etc/ssl/certs/59842.pem"
	I0514 00:16:46.666121    4316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59842.pem
	I0514 00:16:46.672639    4316 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0514 00:16:46.673480    4316 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0514 00:16:46.681837    4316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59842.pem
	I0514 00:16:46.689880    4316 command_runner.go:130] > 3ec20f2e
	I0514 00:16:46.699676    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59842.pem /etc/ssl/certs/3ec20f2e.0"
	I0514 00:16:46.728150    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0514 00:16:46.754886    4316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:16:46.761345    4316 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:16:46.761345    4316 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:16:46.770119    4316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:16:46.781912    4316 command_runner.go:130] > b5213941
	I0514 00:16:46.790612    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0514 00:16:46.817917    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5984.pem && ln -fs /usr/share/ca-certificates/5984.pem /etc/ssl/certs/5984.pem"
	I0514 00:16:46.846604    4316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5984.pem
	I0514 00:16:46.854720    4316 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0514 00:16:46.854720    4316 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0514 00:16:46.866338    4316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5984.pem
	I0514 00:16:46.874929    4316 command_runner.go:130] > 51391683
	I0514 00:16:46.885080    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5984.pem /etc/ssl/certs/51391683.0"
	I0514 00:16:46.914689    4316 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0514 00:16:46.922686    4316 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0514 00:16:46.922686    4316 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0514 00:16:46.922686    4316 command_runner.go:130] > Device: 8,1	Inode: 4196178     Links: 1
	I0514 00:16:46.922875    4316 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0514 00:16:46.922928    4316 command_runner.go:130] > Access: 2024-05-13 23:55:59.004892352 +0000
	I0514 00:16:46.922928    4316 command_runner.go:130] > Modify: 2024-05-13 23:55:59.004892352 +0000
	I0514 00:16:46.922928    4316 command_runner.go:130] > Change: 2024-05-13 23:55:59.004892352 +0000
	I0514 00:16:46.922928    4316 command_runner.go:130] >  Birth: 2024-05-13 23:55:59.004892352 +0000
	I0514 00:16:46.932037    4316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0514 00:16:46.940908    4316 command_runner.go:130] > Certificate will not expire
	I0514 00:16:46.949788    4316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0514 00:16:46.958930    4316 command_runner.go:130] > Certificate will not expire
	I0514 00:16:46.968372    4316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0514 00:16:46.977315    4316 command_runner.go:130] > Certificate will not expire
	I0514 00:16:46.985536    4316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0514 00:16:46.995597    4316 command_runner.go:130] > Certificate will not expire
	I0514 00:16:47.002968    4316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0514 00:16:47.011730    4316 command_runner.go:130] > Certificate will not expire
	I0514 00:16:47.019252    4316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0514 00:16:47.027599    4316 command_runner.go:130] > Certificate will not expire
	I0514 00:16:47.029084    4316 kubeadm.go:391] StartCluster: {Name:multinode-101100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-101100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.102.122 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.109.58 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.102.231 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0514 00:16:47.036513    4316 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0514 00:16:47.065945    4316 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0514 00:16:47.082783    4316 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0514 00:16:47.082874    4316 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0514 00:16:47.082874    4316 command_runner.go:130] > /var/lib/minikube/etcd:
	I0514 00:16:47.082874    4316 command_runner.go:130] > member
	W0514 00:16:47.082994    4316 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0514 00:16:47.082994    4316 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0514 00:16:47.083053    4316 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0514 00:16:47.091039    4316 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0514 00:16:47.109091    4316 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0514 00:16:47.110220    4316 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-101100" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0514 00:16:47.110619    4316 kubeconfig.go:62] C:\Users\jenkins.minikube5\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-101100" cluster setting kubeconfig missing "multinode-101100" context setting]
	I0514 00:16:47.111367    4316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 00:16:47.123911    4316 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0514 00:16:47.124910    4316 kapi.go:59] client config for multinode-101100: &rest.Config{Host:"https://172.23.102.122:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100/client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100/client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0514 00:16:47.125257    4316 cert_rotation.go:137] Starting client certificate rotation controller
	I0514 00:16:47.134253    4316 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0514 00:16:47.150072    4316 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0514 00:16:47.150072    4316 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0514 00:16:47.151207    4316 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0514 00:16:47.151207    4316 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0514 00:16:47.151207    4316 command_runner.go:130] >  kind: InitConfiguration
	I0514 00:16:47.151207    4316 command_runner.go:130] >  localAPIEndpoint:
	I0514 00:16:47.151207    4316 command_runner.go:130] > -  advertiseAddress: 172.23.106.39
	I0514 00:16:47.151207    4316 command_runner.go:130] > +  advertiseAddress: 172.23.102.122
	I0514 00:16:47.151207    4316 command_runner.go:130] >    bindPort: 8443
	I0514 00:16:47.151259    4316 command_runner.go:130] >  bootstrapTokens:
	I0514 00:16:47.151259    4316 command_runner.go:130] >    - groups:
	I0514 00:16:47.151259    4316 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0514 00:16:47.151259    4316 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0514 00:16:47.151285    4316 command_runner.go:130] >    name: "multinode-101100"
	I0514 00:16:47.151285    4316 command_runner.go:130] >    kubeletExtraArgs:
	I0514 00:16:47.151285    4316 command_runner.go:130] > -    node-ip: 172.23.106.39
	I0514 00:16:47.151285    4316 command_runner.go:130] > +    node-ip: 172.23.102.122
	I0514 00:16:47.151285    4316 command_runner.go:130] >    taints: []
	I0514 00:16:47.151285    4316 command_runner.go:130] >  ---
	I0514 00:16:47.151285    4316 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0514 00:16:47.151347    4316 command_runner.go:130] >  kind: ClusterConfiguration
	I0514 00:16:47.151411    4316 command_runner.go:130] >  apiServer:
	I0514 00:16:47.151411    4316 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.23.106.39"]
	I0514 00:16:47.151411    4316 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.23.102.122"]
	I0514 00:16:47.151411    4316 command_runner.go:130] >    extraArgs:
	I0514 00:16:47.151411    4316 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0514 00:16:47.151411    4316 command_runner.go:130] >  controllerManager:
	I0514 00:16:47.151655    4316 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.23.106.39
	+  advertiseAddress: 172.23.102.122
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-101100"
	   kubeletExtraArgs:
	-    node-ip: 172.23.106.39
	+    node-ip: 172.23.102.122
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.23.106.39"]
	+  certSANs: ["127.0.0.1", "localhost", "172.23.102.122"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
	I0514 00:16:47.151740    4316 kubeadm.go:1154] stopping kube-system containers ...
	I0514 00:16:47.159132    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0514 00:16:47.182689    4316 command_runner.go:130] > 76c5ab7859ef
	I0514 00:16:47.182765    4316 command_runner.go:130] > e6ee22ee5c1b
	I0514 00:16:47.182765    4316 command_runner.go:130] > 8f7c140951f4
	I0514 00:16:47.182765    4316 command_runner.go:130] > 8bb49b28c842
	I0514 00:16:47.182805    4316 command_runner.go:130] > 9c4eb727cedb
	I0514 00:16:47.182805    4316 command_runner.go:130] > 91edaaa00da2
	I0514 00:16:47.182805    4316 command_runner.go:130] > 90d7537422a8
	I0514 00:16:47.182834    4316 command_runner.go:130] > 9bd694480978
	I0514 00:16:47.182834    4316 command_runner.go:130] > eda79d47d28f
	I0514 00:16:47.182834    4316 command_runner.go:130] > e96f94398d6d
	I0514 00:16:47.182874    4316 command_runner.go:130] > 964887fc5d36
	I0514 00:16:47.182874    4316 command_runner.go:130] > 06f1a683cad8
	I0514 00:16:47.182905    4316 command_runner.go:130] > da9268fd6556
	I0514 00:16:47.182905    4316 command_runner.go:130] > 287e744a4dc2
	I0514 00:16:47.182905    4316 command_runner.go:130] > ad0550a5dabf
	I0514 00:16:47.182905    4316 command_runner.go:130] > fcb3b27edcd2
	I0514 00:16:47.182974    4316 docker.go:483] Stopping containers: [76c5ab7859ef e6ee22ee5c1b 8f7c140951f4 8bb49b28c842 9c4eb727cedb 91edaaa00da2 90d7537422a8 9bd694480978 eda79d47d28f e96f94398d6d 964887fc5d36 06f1a683cad8 da9268fd6556 287e744a4dc2 ad0550a5dabf fcb3b27edcd2]
	I0514 00:16:47.190450    4316 ssh_runner.go:195] Run: docker stop 76c5ab7859ef e6ee22ee5c1b 8f7c140951f4 8bb49b28c842 9c4eb727cedb 91edaaa00da2 90d7537422a8 9bd694480978 eda79d47d28f e96f94398d6d 964887fc5d36 06f1a683cad8 da9268fd6556 287e744a4dc2 ad0550a5dabf fcb3b27edcd2
	I0514 00:16:47.209602    4316 command_runner.go:130] > 76c5ab7859ef
	I0514 00:16:47.209602    4316 command_runner.go:130] > e6ee22ee5c1b
	I0514 00:16:47.209602    4316 command_runner.go:130] > 8f7c140951f4
	I0514 00:16:47.214857    4316 command_runner.go:130] > 8bb49b28c842
	I0514 00:16:47.214857    4316 command_runner.go:130] > 9c4eb727cedb
	I0514 00:16:47.214857    4316 command_runner.go:130] > 91edaaa00da2
	I0514 00:16:47.214857    4316 command_runner.go:130] > 90d7537422a8
	I0514 00:16:47.214857    4316 command_runner.go:130] > 9bd694480978
	I0514 00:16:47.214857    4316 command_runner.go:130] > eda79d47d28f
	I0514 00:16:47.215291    4316 command_runner.go:130] > e96f94398d6d
	I0514 00:16:47.215357    4316 command_runner.go:130] > 964887fc5d36
	I0514 00:16:47.215357    4316 command_runner.go:130] > 06f1a683cad8
	I0514 00:16:47.215357    4316 command_runner.go:130] > da9268fd6556
	I0514 00:16:47.215970    4316 command_runner.go:130] > 287e744a4dc2
	I0514 00:16:47.216196    4316 command_runner.go:130] > ad0550a5dabf
	I0514 00:16:47.216196    4316 command_runner.go:130] > fcb3b27edcd2
	I0514 00:16:47.228375    4316 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0514 00:16:47.261413    4316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0514 00:16:47.276310    4316 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0514 00:16:47.276310    4316 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0514 00:16:47.276928    4316 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0514 00:16:47.277108    4316 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0514 00:16:47.277261    4316 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0514 00:16:47.277301    4316 kubeadm.go:156] found existing configuration files:
	
	I0514 00:16:47.289673    4316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0514 00:16:47.306806    4316 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0514 00:16:47.306806    4316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0514 00:16:47.317953    4316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0514 00:16:47.341565    4316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0514 00:16:47.357495    4316 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0514 00:16:47.357495    4316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0514 00:16:47.365755    4316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0514 00:16:47.391097    4316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0514 00:16:47.406813    4316 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0514 00:16:47.407574    4316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0514 00:16:47.417933    4316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0514 00:16:47.442107    4316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0514 00:16:47.462703    4316 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0514 00:16:47.463307    4316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0514 00:16:47.471330    4316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0514 00:16:47.496097    4316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0514 00:16:47.512818    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0514 00:16:47.719250    4316 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0514 00:16:47.719250    4316 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0514 00:16:47.719747    4316 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0514 00:16:47.720034    4316 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0514 00:16:47.721726    4316 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0514 00:16:47.721812    4316 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0514 00:16:47.723049    4316 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0514 00:16:47.723386    4316 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0514 00:16:47.723740    4316 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0514 00:16:47.723740    4316 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0514 00:16:47.724272    4316 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0514 00:16:47.727309    4316 command_runner.go:130] > [certs] Using the existing "sa" key
	I0514 00:16:47.729797    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0514 00:16:49.151260    4316 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0514 00:16:49.151750    4316 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0514 00:16:49.151750    4316 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0514 00:16:49.151750    4316 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0514 00:16:49.151750    4316 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0514 00:16:49.151855    4316 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0514 00:16:49.151855    4316 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.4219695s)
	I0514 00:16:49.151855    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0514 00:16:49.238314    4316 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0514 00:16:49.239346    4316 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0514 00:16:49.239346    4316 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0514 00:16:49.414673    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0514 00:16:49.515362    4316 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0514 00:16:49.515486    4316 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0514 00:16:49.515486    4316 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0514 00:16:49.515486    4316 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0514 00:16:49.515592    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0514 00:16:49.609805    4316 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0514 00:16:49.609955    4316 api_server.go:52] waiting for apiserver process to appear ...
	I0514 00:16:49.621169    4316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 00:16:50.127859    4316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 00:16:50.635206    4316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 00:16:51.124082    4316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 00:16:51.633189    4316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 00:16:51.656202    4316 command_runner.go:130] > 1838
	I0514 00:16:51.657036    4316 api_server.go:72] duration metric: took 2.0470115s to wait for apiserver process to appear ...
	I0514 00:16:51.657239    4316 api_server.go:88] waiting for apiserver healthz status ...
	I0514 00:16:51.657363    4316 api_server.go:253] Checking apiserver healthz at https://172.23.102.122:8443/healthz ...
	I0514 00:16:54.585189    4316 api_server.go:279] https://172.23.102.122:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0514 00:16:54.585189    4316 api_server.go:103] status: https://172.23.102.122:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0514 00:16:54.585189    4316 api_server.go:253] Checking apiserver healthz at https://172.23.102.122:8443/healthz ...
	I0514 00:16:54.624538    4316 api_server.go:279] https://172.23.102.122:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0514 00:16:54.624538    4316 api_server.go:103] status: https://172.23.102.122:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0514 00:16:54.665959    4316 api_server.go:253] Checking apiserver healthz at https://172.23.102.122:8443/healthz ...
	I0514 00:16:54.707569    4316 api_server.go:279] https://172.23.102.122:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0514 00:16:54.707646    4316 api_server.go:103] status: https://172.23.102.122:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0514 00:16:55.172587    4316 api_server.go:253] Checking apiserver healthz at https://172.23.102.122:8443/healthz ...
	I0514 00:16:55.182411    4316 api_server.go:279] https://172.23.102.122:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0514 00:16:55.182507    4316 api_server.go:103] status: https://172.23.102.122:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0514 00:16:55.659989    4316 api_server.go:253] Checking apiserver healthz at https://172.23.102.122:8443/healthz ...
	I0514 00:16:55.673996    4316 api_server.go:279] https://172.23.102.122:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0514 00:16:55.673996    4316 api_server.go:103] status: https://172.23.102.122:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0514 00:16:56.166856    4316 api_server.go:253] Checking apiserver healthz at https://172.23.102.122:8443/healthz ...
	I0514 00:16:56.183940    4316 api_server.go:279] https://172.23.102.122:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0514 00:16:56.183940    4316 api_server.go:103] status: https://172.23.102.122:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0514 00:16:56.658537    4316 api_server.go:253] Checking apiserver healthz at https://172.23.102.122:8443/healthz ...
	I0514 00:16:56.671344    4316 api_server.go:279] https://172.23.102.122:8443/healthz returned 200:
	ok
	I0514 00:16:56.671578    4316 round_trippers.go:463] GET https://172.23.102.122:8443/version
	I0514 00:16:56.671578    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:56.671578    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:56.671578    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:56.705098    4316 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0514 00:16:56.705098    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:56.705098    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:56 GMT
	I0514 00:16:56.705098    4316 round_trippers.go:580]     Audit-Id: c7c20ff9-70cd-4060-84d7-ec8bf3825c2a
	I0514 00:16:56.705098    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:56.705098    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:56.705098    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:56.705098    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:56.705098    4316 round_trippers.go:580]     Content-Length: 263
	I0514 00:16:56.705911    4316 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0514 00:16:56.706007    4316 api_server.go:141] control plane version: v1.30.0
	I0514 00:16:56.706007    4316 api_server.go:131] duration metric: took 5.048412s to wait for apiserver health ...
	I0514 00:16:56.706007    4316 cni.go:84] Creating CNI manager for ""
	I0514 00:16:56.706007    4316 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0514 00:16:56.708331    4316 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0514 00:16:56.718220    4316 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0514 00:16:56.724796    4316 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0514 00:16:56.725261    4316 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0514 00:16:56.725261    4316 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0514 00:16:56.725261    4316 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0514 00:16:56.725261    4316 command_runner.go:130] > Access: 2024-05-14 00:15:32.198040600 +0000
	I0514 00:16:56.725345    4316 command_runner.go:130] > Modify: 2024-05-09 03:04:38.000000000 +0000
	I0514 00:16:56.725345    4316 command_runner.go:130] > Change: 2024-05-14 00:15:21.020000000 +0000
	I0514 00:16:56.725345    4316 command_runner.go:130] >  Birth: -
	I0514 00:16:56.725491    4316 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0514 00:16:56.725491    4316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0514 00:16:56.783701    4316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0514 00:16:57.633064    4316 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0514 00:16:57.633064    4316 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0514 00:16:57.633382    4316 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0514 00:16:57.633382    4316 command_runner.go:130] > daemonset.apps/kindnet configured
	I0514 00:16:57.633464    4316 system_pods.go:43] waiting for kube-system pods to appear ...
	I0514 00:16:57.633662    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods
	I0514 00:16:57.633662    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:57.633662    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:57.633662    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:57.639096    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:16:57.640095    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:57.640095    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:57.640095    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:57.640095    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:57.640095    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:57 GMT
	I0514 00:16:57.640095    4316 round_trippers.go:580]     Audit-Id: c0dd21b6-0c47-4067-b310-9b08bd0f7eec
	I0514 00:16:57.640180    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:57.641605    4316 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1736"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87830 chars]
	I0514 00:16:57.647515    4316 system_pods.go:59] 12 kube-system pods found
	I0514 00:16:57.648051    4316 system_pods.go:61] "coredns-7db6d8ff4d-4kmx4" [06858a47-f51b-48d8-a2a6-f60b8107be13] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0514 00:16:57.648051    4316 system_pods.go:61] "etcd-multinode-101100" [74cd34fe-a56b-453d-afb3-a9db3db0d5ba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0514 00:16:57.648051    4316 system_pods.go:61] "kindnet-2lwsm" [26b8beff-9849-4cbf-9a2b-8ef6354fa5ca] Running
	I0514 00:16:57.648051    4316 system_pods.go:61] "kindnet-9q2tv" [5b3ee167-f21f-46b3-bace-03a7233717e0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0514 00:16:57.648051    4316 system_pods.go:61] "kindnet-tfbt8" [95a6d195-9e10-4569-902b-b56e495c9b86] Running
	I0514 00:16:57.648051    4316 system_pods.go:61] "kube-apiserver-multinode-101100" [60889645-4c2d-4cfc-b322-c0f1b6e34503] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0514 00:16:57.648051    4316 system_pods.go:61] "kube-controller-manager-multinode-101100" [1a74381a-7477-4fd3-b344-c4a230014f97] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0514 00:16:57.648152    4316 system_pods.go:61] "kube-proxy-8zsgn" [af208cbd-fa8a-4822-9b19-dc30f63fa59c] Running
	I0514 00:16:57.648152    4316 system_pods.go:61] "kube-proxy-b25hq" [d39f5818-3e88-4162-a7ce-734ca28103bf] Running
	I0514 00:16:57.648152    4316 system_pods.go:61] "kube-proxy-zhcz6" [a9a488af-41ba-47f3-87b0-5a2f062afad6] Running
	I0514 00:16:57.648152    4316 system_pods.go:61] "kube-scheduler-multinode-101100" [d7300c2d-377f-4061-bd34-5f7593b7e827] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0514 00:16:57.648152    4316 system_pods.go:61] "storage-provisioner" [a92f04b8-a93f-42d8-81d7-d4da6bf2e247] Running
	I0514 00:16:57.648197    4316 system_pods.go:74] duration metric: took 14.6876ms to wait for pod list to return data ...
	I0514 00:16:57.648197    4316 node_conditions.go:102] verifying NodePressure condition ...
	I0514 00:16:57.648239    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes
	I0514 00:16:57.648239    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:57.648239    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:57.648239    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:57.652816    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:16:57.653275    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:57.653275    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:57.653275    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:57.653275    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:57.653275    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:57 GMT
	I0514 00:16:57.653275    4316 round_trippers.go:580]     Audit-Id: 52f8cd9b-9478-4a5b-b2a9-7058f635ac93
	I0514 00:16:57.653275    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:57.653275    4316 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1736"},"items":[{"metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16289 chars]
	I0514 00:16:57.654007    4316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 00:16:57.654007    4316 node_conditions.go:123] node cpu capacity is 2
	I0514 00:16:57.654007    4316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 00:16:57.654007    4316 node_conditions.go:123] node cpu capacity is 2
	I0514 00:16:57.654007    4316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 00:16:57.654007    4316 node_conditions.go:123] node cpu capacity is 2
	I0514 00:16:57.654007    4316 node_conditions.go:105] duration metric: took 5.8098ms to run NodePressure ...
	I0514 00:16:57.654007    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0514 00:16:57.891879    4316 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0514 00:16:57.985373    4316 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0514 00:16:57.989862    4316 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0514 00:16:57.990024    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0514 00:16:57.990024    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:57.990024    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:57.990077    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:57.996623    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:16:57.996623    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:57.996623    4316 round_trippers.go:580]     Audit-Id: c7babe1e-ef01-4342-82cc-e0291869b4ea
	I0514 00:16:57.996623    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:57.996623    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:57.996623    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:57.996623    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:57.996623    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:57.997762    4316 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1740"},"items":[{"metadata":{"name":"etcd-multinode-101100","namespace":"kube-system","uid":"74cd34fe-a56b-453d-afb3-a9db3db0d5ba","resourceVersion":"1710","creationTimestamp":"2024-05-14T00:16:55Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.102.122:2379","kubernetes.io/config.hash":"62d8afc7714e8ab65bff9675d120bb67","kubernetes.io/config.mirror":"62d8afc7714e8ab65bff9675d120bb67","kubernetes.io/config.seen":"2024-05-14T00:16:49.843121737Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:16:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 30563 chars]
	I0514 00:16:57.999809    4316 kubeadm.go:733] kubelet initialised
	I0514 00:16:57.999912    4316 kubeadm.go:734] duration metric: took 10.05ms waiting for restarted kubelet to initialise ...
	I0514 00:16:57.999912    4316 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 00:16:58.000170    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods
	I0514 00:16:58.000170    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.000170    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.000170    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.004319    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:16:58.004319    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.004319    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.004319    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.004319    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:58.004319    4316 round_trippers.go:580]     Audit-Id: d35a7077-59c8-46af-8259-69aafd6d932f
	I0514 00:16:58.004319    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.004319    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.005490    4316 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1740"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87830 chars]
	I0514 00:16:58.009394    4316 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace to be "Ready" ...
	I0514 00:16:58.009512    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:16:58.009512    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.009512    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.009512    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.011831    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:16:58.011831    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.011831    4316 round_trippers.go:580]     Audit-Id: 7cbc2aea-a828-4341-b384-2cb1cc2ef98e
	I0514 00:16:58.011831    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.011831    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.011831    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.012786    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.012786    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:58.012855    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:16:58.013479    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:16:58.013542    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.013542    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.013542    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.015734    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:16:58.015734    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.015734    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:58.015734    4316 round_trippers.go:580]     Audit-Id: 0550ae30-001d-4590-99e5-444c9cac4998
	I0514 00:16:58.015734    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.015734    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.015734    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.015734    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.015734    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:16:58.015734    4316 pod_ready.go:97] node "multinode-101100" hosting pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100" has status "Ready":"False"
	I0514 00:16:58.015734    4316 pod_ready.go:81] duration metric: took 6.3395ms for pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace to be "Ready" ...
	E0514 00:16:58.015734    4316 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-101100" hosting pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100" has status "Ready":"False"
	I0514 00:16:58.015734    4316 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:16:58.015734    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-101100
	I0514 00:16:58.015734    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.016742    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.016742    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.018829    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:16:58.018829    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.018829    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.018829    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.018829    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.018829    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.018829    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:58.018829    4316 round_trippers.go:580]     Audit-Id: 82e3e21c-c444-40fb-90c7-62e3d45c1350
	I0514 00:16:58.019732    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-101100","namespace":"kube-system","uid":"74cd34fe-a56b-453d-afb3-a9db3db0d5ba","resourceVersion":"1710","creationTimestamp":"2024-05-14T00:16:55Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.102.122:2379","kubernetes.io/config.hash":"62d8afc7714e8ab65bff9675d120bb67","kubernetes.io/config.mirror":"62d8afc7714e8ab65bff9675d120bb67","kubernetes.io/config.seen":"2024-05-14T00:16:49.843121737Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:16:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6395 chars]
	I0514 00:16:58.020200    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:16:58.020200    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.020200    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.020200    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.022398    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:16:58.022398    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.022398    4316 round_trippers.go:580]     Audit-Id: 1645c3a5-0c58-4f60-9aad-35356d67c1b2
	I0514 00:16:58.022398    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.022398    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.022398    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.022398    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.022398    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:58.022724    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:16:58.022724    4316 pod_ready.go:97] node "multinode-101100" hosting pod "etcd-multinode-101100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100" has status "Ready":"False"
	I0514 00:16:58.022724    4316 pod_ready.go:81] duration metric: took 6.9898ms for pod "etcd-multinode-101100" in "kube-system" namespace to be "Ready" ...
	E0514 00:16:58.022724    4316 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-101100" hosting pod "etcd-multinode-101100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100" has status "Ready":"False"
	I0514 00:16:58.022724    4316 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:16:58.023276    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-101100
	I0514 00:16:58.023276    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.023276    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.023276    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.029528    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:16:58.029528    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.029528    4316 round_trippers.go:580]     Audit-Id: 537c2268-6ff9-44f9-9117-ddc11414a511
	I0514 00:16:58.029528    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.029528    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.029528    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.029528    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.029528    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:58.029528    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-101100","namespace":"kube-system","uid":"60889645-4c2d-4cfc-b322-c0f1b6e34503","resourceVersion":"1709","creationTimestamp":"2024-05-14T00:16:55Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.102.122:8443","kubernetes.io/config.hash":"378d61cf78af695f1df41e321907a84d","kubernetes.io/config.mirror":"378d61cf78af695f1df41e321907a84d","kubernetes.io/config.seen":"2024-05-14T00:16:49.778409853Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:16:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7949 chars]
	I0514 00:16:58.030474    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:16:58.030474    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.030474    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.030474    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.033230    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:16:58.033230    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.033230    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.033230    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.033230    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.033230    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.033230    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:58.033230    4316 round_trippers.go:580]     Audit-Id: e5f41acb-e690-4a95-8a06-aff24eb7d538
	I0514 00:16:58.033230    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:16:58.033230    4316 pod_ready.go:97] node "multinode-101100" hosting pod "kube-apiserver-multinode-101100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100" has status "Ready":"False"
	I0514 00:16:58.033230    4316 pod_ready.go:81] duration metric: took 10.5055ms for pod "kube-apiserver-multinode-101100" in "kube-system" namespace to be "Ready" ...
	E0514 00:16:58.033230    4316 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-101100" hosting pod "kube-apiserver-multinode-101100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100" has status "Ready":"False"
	I0514 00:16:58.033230    4316 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:16:58.033230    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-101100
	I0514 00:16:58.033230    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.033230    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.033230    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.037041    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:16:58.037041    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.037041    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.037041    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:58.037041    4316 round_trippers.go:580]     Audit-Id: 91361b84-5dad-467f-b832-80619abdfac3
	I0514 00:16:58.037041    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.037041    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.037041    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.037582    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-101100","namespace":"kube-system","uid":"1a74381a-7477-4fd3-b344-c4a230014f97","resourceVersion":"1704","creationTimestamp":"2024-05-13T23:56:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5393de2704b2efef461d22fa52aa93c8","kubernetes.io/config.mirror":"5393de2704b2efef461d22fa52aa93c8","kubernetes.io/config.seen":"2024-05-13T23:56:09.392106640Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0514 00:16:58.038066    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:16:58.038120    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.038120    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.038120    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.040357    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:16:58.040357    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.040357    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.040357    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.040357    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.040357    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:58.040357    4316 round_trippers.go:580]     Audit-Id: 37ad5494-c885-496c-b557-e7961e1bdbfb
	I0514 00:16:58.040357    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.040357    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:16:58.040357    4316 pod_ready.go:97] node "multinode-101100" hosting pod "kube-controller-manager-multinode-101100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100" has status "Ready":"False"
	I0514 00:16:58.040357    4316 pod_ready.go:81] duration metric: took 7.1259ms for pod "kube-controller-manager-multinode-101100" in "kube-system" namespace to be "Ready" ...
	E0514 00:16:58.040357    4316 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-101100" hosting pod "kube-controller-manager-multinode-101100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100" has status "Ready":"False"
	I0514 00:16:58.040357    4316 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8zsgn" in "kube-system" namespace to be "Ready" ...
	I0514 00:16:58.237011    4316 request.go:629] Waited for 196.6424ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8zsgn
	I0514 00:16:58.237323    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8zsgn
	I0514 00:16:58.237323    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.237323    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.237323    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.240917    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:16:58.241232    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.241232    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.241232    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.241232    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:58.241232    4316 round_trippers.go:580]     Audit-Id: 96720c27-9fb4-4bf9-8a0d-51a2002d1f62
	I0514 00:16:58.241232    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.241232    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.241232    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8zsgn","generateName":"kube-proxy-","namespace":"kube-system","uid":"af208cbd-fa8a-4822-9b19-dc30f63fa59c","resourceVersion":"1621","creationTimestamp":"2024-05-14T00:03:17Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:03:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0514 00:16:58.441599    4316 request.go:629] Waited for 199.1243ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:16:58.442008    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:16:58.442182    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.442253    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.442253    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.446036    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:16:58.446036    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.446036    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.446036    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.446036    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.446036    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.446036    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:58.446036    4316 round_trippers.go:580]     Audit-Id: d1de8feb-0016-4798-a45b-5a1efd685a68
	I0514 00:16:58.446534    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"fd2d4a0b-dc97-4959-b2ba-0f51719ad2b3","resourceVersion":"1631","creationTimestamp":"2024-05-14T00:12:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_12_45_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:12:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4400 chars]
	I0514 00:16:58.446636    4316 pod_ready.go:97] node "multinode-101100-m03" hosting pod "kube-proxy-8zsgn" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100-m03" has status "Ready":"Unknown"
	I0514 00:16:58.446636    4316 pod_ready.go:81] duration metric: took 406.2541ms for pod "kube-proxy-8zsgn" in "kube-system" namespace to be "Ready" ...
	E0514 00:16:58.446636    4316 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-101100-m03" hosting pod "kube-proxy-8zsgn" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100-m03" has status "Ready":"Unknown"
	I0514 00:16:58.446636    4316 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-b25hq" in "kube-system" namespace to be "Ready" ...
	I0514 00:16:58.641794    4316 request.go:629] Waited for 195.0509ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b25hq
	I0514 00:16:58.642006    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b25hq
	I0514 00:16:58.642006    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.642006    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.642123    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.645512    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:16:58.645889    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.645889    4316 round_trippers.go:580]     Audit-Id: bebc959d-6568-4027-8765-e2df5b294951
	I0514 00:16:58.645889    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.645889    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.645889    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.645889    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.645889    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:58 GMT
	I0514 00:16:58.646455    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-b25hq","generateName":"kube-proxy-","namespace":"kube-system","uid":"d39f5818-3e88-4162-a7ce-734ca28103bf","resourceVersion":"1641","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0514 00:16:58.844737    4316 request.go:629] Waited for 197.231ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:16:58.845129    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:16:58.845129    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:58.845129    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:58.845129    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:58.848706    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:16:58.848706    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:58.848706    4316 round_trippers.go:580]     Audit-Id: fbd681a6-2f5a-4f26-9724-f358a491c712
	I0514 00:16:58.848706    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:58.848706    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:58.848706    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:58.848706    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:58.848706    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:59 GMT
	I0514 00:16:58.848706    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"1642","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4485 chars]
	I0514 00:16:58.849465    4316 pod_ready.go:97] node "multinode-101100-m02" hosting pod "kube-proxy-b25hq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100-m02" has status "Ready":"Unknown"
	I0514 00:16:58.849465    4316 pod_ready.go:81] duration metric: took 402.8036ms for pod "kube-proxy-b25hq" in "kube-system" namespace to be "Ready" ...
	E0514 00:16:58.849465    4316 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-101100-m02" hosting pod "kube-proxy-b25hq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100-m02" has status "Ready":"Unknown"
	I0514 00:16:58.849465    4316 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zhcz6" in "kube-system" namespace to be "Ready" ...
	I0514 00:16:59.049100    4316 request.go:629] Waited for 199.4984ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zhcz6
	I0514 00:16:59.049330    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zhcz6
	I0514 00:16:59.049330    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:59.049330    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:59.049330    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:59.055481    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:16:59.055481    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:59.055481    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:59.055481    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:59.055481    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:59.055481    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:59 GMT
	I0514 00:16:59.055481    4316 round_trippers.go:580]     Audit-Id: aec56393-54ad-44f8-b47f-d1e7de7abac4
	I0514 00:16:59.055481    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:59.056154    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zhcz6","generateName":"kube-proxy-","namespace":"kube-system","uid":"a9a488af-41ba-47f3-87b0-5a2f062afad6","resourceVersion":"1732","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0514 00:16:59.236203    4316 request.go:629] Waited for 179.3737ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:16:59.236581    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:16:59.236581    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:59.236581    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:59.236581    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:59.240944    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:16:59.240944    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:59.241033    4316 round_trippers.go:580]     Audit-Id: 55207bf2-b020-41a6-8c4b-727e05a5a996
	I0514 00:16:59.241033    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:59.241033    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:59.241033    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:59.241033    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:59.241033    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:59 GMT
	I0514 00:16:59.241327    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:16:59.242013    4316 pod_ready.go:97] node "multinode-101100" hosting pod "kube-proxy-zhcz6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100" has status "Ready":"False"
	I0514 00:16:59.242088    4316 pod_ready.go:81] duration metric: took 392.5992ms for pod "kube-proxy-zhcz6" in "kube-system" namespace to be "Ready" ...
	E0514 00:16:59.242088    4316 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-101100" hosting pod "kube-proxy-zhcz6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100" has status "Ready":"False"
	I0514 00:16:59.242088    4316 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:16:59.439671    4316 request.go:629] Waited for 197.1241ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-101100
	I0514 00:16:59.439671    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-101100
	I0514 00:16:59.439671    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:59.439671    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:59.439671    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:59.443238    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:16:59.443238    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:59.443238    4316 round_trippers.go:580]     Audit-Id: 4b4375a1-8177-41f3-8456-500a26c3533d
	I0514 00:16:59.443238    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:59.443930    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:59.443930    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:59.443930    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:59.443930    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:59 GMT
	I0514 00:16:59.444103    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-101100","namespace":"kube-system","uid":"d7300c2d-377f-4061-bd34-5f7593b7e827","resourceVersion":"1707","creationTimestamp":"2024-05-13T23:56:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8083abd658221f47cabf81a00c4ca98e","kubernetes.io/config.mirror":"8083abd658221f47cabf81a00c4ca98e","kubernetes.io/config.seen":"2024-05-13T23:56:09.392108241Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5449 chars]
	I0514 00:16:59.643851    4316 request.go:629] Waited for 199.065ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:16:59.643933    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:16:59.643933    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:59.643933    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:59.643933    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:59.647809    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:16:59.647809    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:59.647809    4316 round_trippers.go:580]     Audit-Id: 105eb453-6f2b-40c5-8fce-367c353c5334
	I0514 00:16:59.647809    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:59.647809    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:59.647809    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:59.647809    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:59.647809    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:16:59 GMT
	I0514 00:16:59.647809    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:16:59.649098    4316 pod_ready.go:97] node "multinode-101100" hosting pod "kube-scheduler-multinode-101100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100" has status "Ready":"False"
	I0514 00:16:59.649177    4316 pod_ready.go:81] duration metric: took 407.0633ms for pod "kube-scheduler-multinode-101100" in "kube-system" namespace to be "Ready" ...
	E0514 00:16:59.649177    4316 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-101100" hosting pod "kube-scheduler-multinode-101100" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100" has status "Ready":"False"
	I0514 00:16:59.649272    4316 pod_ready.go:38] duration metric: took 1.6491558s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 00:16:59.649363    4316 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0514 00:16:59.664951    4316 command_runner.go:130] > -16
	I0514 00:16:59.665391    4316 ops.go:34] apiserver oom_adj: -16
	I0514 00:16:59.665391    4316 kubeadm.go:591] duration metric: took 12.5815566s to restartPrimaryControlPlane
	I0514 00:16:59.665391    4316 kubeadm.go:393] duration metric: took 12.6355889s to StartCluster
	I0514 00:16:59.665435    4316 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 00:16:59.665435    4316 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0514 00:16:59.667441    4316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 00:16:59.669204    4316 start.go:234] Will wait 6m0s for node &{Name: IP:172.23.102.122 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0514 00:16:59.675726    4316 out.go:177] * Verifying Kubernetes components...
	I0514 00:16:59.669204    4316 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0514 00:16:59.669667    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:16:59.682526    4316 out.go:177] * Enabled addons: 
	I0514 00:16:59.685601    4316 addons.go:505] duration metric: took 16.4853ms for enable addons: enabled=[]
	I0514 00:16:59.689164    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:16:59.965406    4316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0514 00:16:59.992213    4316 node_ready.go:35] waiting up to 6m0s for node "multinode-101100" to be "Ready" ...
	I0514 00:16:59.992480    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:16:59.992501    4316 round_trippers.go:469] Request Headers:
	I0514 00:16:59.992501    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:16:59.992501    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:16:59.998685    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:16:59.998685    4316 round_trippers.go:577] Response Headers:
	I0514 00:16:59.998685    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:00 GMT
	I0514 00:16:59.998685    4316 round_trippers.go:580]     Audit-Id: cc997441-e608-4041-a627-6c2e185c47bb
	I0514 00:16:59.998685    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:16:59.998685    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:16:59.998685    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:16:59.998685    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:16:59.998685    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:00.504088    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:00.504088    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:00.504088    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:00.504227    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:00.508406    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:00.508406    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:00.508496    4316 round_trippers.go:580]     Audit-Id: 24eee483-7e08-4d31-8dfc-84088194d730
	I0514 00:17:00.508496    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:00.508496    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:00.508496    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:00.508496    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:00.508496    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:00 GMT
	I0514 00:17:00.509076    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:01.000406    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:01.000666    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:01.000666    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:01.000666    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:01.004008    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:01.004008    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:01.004185    4316 round_trippers.go:580]     Audit-Id: 37917791-6276-4b98-9b71-15aeddb0a44b
	I0514 00:17:01.004185    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:01.004185    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:01.004185    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:01.004185    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:01.004185    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:01 GMT
	I0514 00:17:01.004185    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:01.501049    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:01.501126    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:01.501126    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:01.501126    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:01.505226    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:01.505226    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:01.505759    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:01.505759    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:01.505759    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:01.505759    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:01 GMT
	I0514 00:17:01.505759    4316 round_trippers.go:580]     Audit-Id: 7fece720-b918-4aef-b59b-b9df2381c9b5
	I0514 00:17:01.505759    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:01.505892    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:02.001731    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:02.001731    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:02.001731    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:02.001731    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:02.005364    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:02.005364    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:02.005364    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:02.005364    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:02.005736    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:02 GMT
	I0514 00:17:02.005736    4316 round_trippers.go:580]     Audit-Id: 2c191887-d19b-4933-ae16-5d204480ef80
	I0514 00:17:02.005736    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:02.005736    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:02.006128    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:02.007256    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:02.499979    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:02.500312    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:02.500312    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:02.500312    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:02.504440    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:02.504489    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:02.504489    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:02.504489    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:02.504489    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:02.504581    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:02.504581    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:02 GMT
	I0514 00:17:02.504581    4316 round_trippers.go:580]     Audit-Id: abbe0b62-7463-4a66-b671-b911b205de9d
	I0514 00:17:02.504802    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:03.000811    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:03.000956    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:03.000956    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:03.000956    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:03.005771    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:03.005771    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:03.005771    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:03.006468    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:03.006598    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:03 GMT
	I0514 00:17:03.006598    4316 round_trippers.go:580]     Audit-Id: dd9e868b-ee23-4f7d-8a2b-ea95bd9c3cee
	I0514 00:17:03.006598    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:03.006598    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:03.006966    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:03.500457    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:03.500457    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:03.500457    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:03.500457    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:03.504836    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:03.504836    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:03.504836    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:03.504836    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:03.504836    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:03.504836    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:03 GMT
	I0514 00:17:03.504836    4316 round_trippers.go:580]     Audit-Id: c4a8b267-fc64-474b-b9ee-bf6bf6edf98f
	I0514 00:17:03.504836    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:03.505459    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:03.999148    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:03.999148    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:03.999148    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:03.999148    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:04.003159    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:04.003159    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:04.003159    4316 round_trippers.go:580]     Audit-Id: 8c75461a-1b8e-4d6d-b3af-79918329b9a3
	I0514 00:17:04.003159    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:04.003159    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:04.003159    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:04.003159    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:04.003159    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:04 GMT
	I0514 00:17:04.003159    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:04.498170    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:04.498170    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:04.498170    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:04.498170    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:04.502661    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:04.502661    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:04.502661    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:04.502661    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:04.502661    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:04.502661    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:04.502661    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:04 GMT
	I0514 00:17:04.502661    4316 round_trippers.go:580]     Audit-Id: 40859645-6768-4e34-9836-dc90f4e3cac3
	I0514 00:17:04.502975    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:04.503735    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:04.997006    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:04.997006    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:04.997206    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:04.997206    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:04.999954    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:04.999954    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:04.999954    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:04.999954    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:04.999954    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:05 GMT
	I0514 00:17:04.999954    4316 round_trippers.go:580]     Audit-Id: bae1c90f-f7ec-42c0-9c4f-08b8aab803ce
	I0514 00:17:04.999954    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:04.999954    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:05.000625    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:05.494904    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:05.494904    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:05.494904    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:05.494904    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:05.499039    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:05.499092    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:05.499092    4316 round_trippers.go:580]     Audit-Id: 3a8866cf-e13e-4fb2-8c89-8496bd033786
	I0514 00:17:05.499092    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:05.499092    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:05.499092    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:05.499092    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:05.499092    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:05 GMT
	I0514 00:17:05.499092    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:05.996017    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:05.996248    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:05.996248    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:05.996248    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:06.000605    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:06.000840    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:06.000840    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:06 GMT
	I0514 00:17:06.000840    4316 round_trippers.go:580]     Audit-Id: 33eef445-7b1a-4454-9aca-231e5e0096e7
	I0514 00:17:06.000840    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:06.000840    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:06.000840    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:06.000840    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:06.001010    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:06.494357    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:06.494594    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:06.494594    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:06.494594    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:06.497951    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:06.497951    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:06.497951    4316 round_trippers.go:580]     Audit-Id: c6b28072-705f-4ea0-a13c-29f6b4b6b056
	I0514 00:17:06.497951    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:06.497951    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:06.497951    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:06.497951    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:06.497951    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:06 GMT
	I0514 00:17:06.498932    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:06.998763    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:06.998862    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:06.998862    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:06.998862    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:07.001571    4316 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0514 00:17:07.001571    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:07.001571    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:07.001571    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:07.001571    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:07.001571    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:07.001571    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:07 GMT
	I0514 00:17:07.001571    4316 round_trippers.go:580]     Audit-Id: 2b1d7c25-562a-4cc3-ba93-30f5f9e5f048
	I0514 00:17:07.001571    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:07.002325    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:07.500066    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:07.500144    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:07.500144    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:07.500144    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:07.503499    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:07.503499    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:07.503499    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:07.503499    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:07.503499    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:07.503499    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:07.503499    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:07 GMT
	I0514 00:17:07.503499    4316 round_trippers.go:580]     Audit-Id: f31b5e5f-6015-42cd-8e85-3e2cdb8c97e4
	I0514 00:17:07.504356    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1660","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0514 00:17:08.001178    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:08.001178    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:08.001178    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:08.001178    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:08.004866    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:08.004866    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:08.004866    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:08 GMT
	I0514 00:17:08.004866    4316 round_trippers.go:580]     Audit-Id: 1a879941-aa3c-4aad-bc7a-d6adf682914d
	I0514 00:17:08.004866    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:08.004866    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:08.004866    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:08.004866    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:08.005717    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:08.497834    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:08.497834    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:08.497834    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:08.497834    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:08.501585    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:08.501585    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:08.501585    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:08.501585    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:08.501736    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:08.501736    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:08.501736    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:08 GMT
	I0514 00:17:08.501736    4316 round_trippers.go:580]     Audit-Id: 221981f1-96b4-44db-9c65-caa46decdcc5
	I0514 00:17:08.501973    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:08.997338    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:08.997426    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:08.997426    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:08.997426    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:09.004125    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:09.004125    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:09.004125    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:09.004125    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:09 GMT
	I0514 00:17:09.004125    4316 round_trippers.go:580]     Audit-Id: e466722a-540b-4937-bfb7-0c896c9ccb5b
	I0514 00:17:09.004125    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:09.004125    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:09.004125    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:09.005086    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:09.005086    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:09.501403    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:09.501403    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:09.501403    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:09.501403    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:09.507067    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:17:09.507067    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:09.507067    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:09.507598    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:09.507598    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:09.507598    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:09 GMT
	I0514 00:17:09.507598    4316 round_trippers.go:580]     Audit-Id: cbbdf0d0-2f6a-4e29-81c1-3c4a6efa2c46
	I0514 00:17:09.507598    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:09.507945    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:10.003691    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:10.003976    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:10.004060    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:10.004060    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:10.006954    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:10.007561    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:10.007561    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:10 GMT
	I0514 00:17:10.007561    4316 round_trippers.go:580]     Audit-Id: 229b886c-cb94-4ee4-bbe6-4bcc5bd051dd
	I0514 00:17:10.007561    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:10.007561    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:10.007561    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:10.007681    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:10.007896    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:10.501059    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:10.501059    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:10.501059    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:10.501059    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:10.504760    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:10.505683    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:10.505683    4316 round_trippers.go:580]     Audit-Id: 3b002635-5e87-4db8-85dc-c81e205c958f
	I0514 00:17:10.505683    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:10.505683    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:10.505683    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:10.505683    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:10.505683    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:10 GMT
	I0514 00:17:10.506062    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:11.003157    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:11.003545    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:11.003545    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:11.003545    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:11.011813    4316 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0514 00:17:11.011813    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:11.011813    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:11.011813    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:11.011813    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:11.011813    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:11.011813    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:11 GMT
	I0514 00:17:11.011813    4316 round_trippers.go:580]     Audit-Id: ff6a6741-0a52-4f07-8aa6-2c4bc8ff79fe
	I0514 00:17:11.011813    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:11.011813    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:11.503487    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:11.503487    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:11.503487    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:11.503565    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:11.507407    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:11.507464    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:11.507464    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:11.507464    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:11 GMT
	I0514 00:17:11.507464    4316 round_trippers.go:580]     Audit-Id: 5df9f0ec-c129-41b8-a618-215b48a6ef67
	I0514 00:17:11.507464    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:11.507464    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:11.507464    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:11.507464    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:12.005131    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:12.005131    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:12.005131    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:12.005229    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:12.008510    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:12.008729    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:12.008729    4316 round_trippers.go:580]     Audit-Id: 3d4c88a8-bd66-4282-b6c5-b345b1dde78b
	I0514 00:17:12.008729    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:12.008729    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:12.008825    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:12.008825    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:12.008825    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:12 GMT
	I0514 00:17:12.009103    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:12.502348    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:12.502348    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:12.502348    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:12.502348    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:12.509251    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:12.509251    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:12.509251    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:12 GMT
	I0514 00:17:12.509251    4316 round_trippers.go:580]     Audit-Id: 1555e88a-068d-42bd-9d7b-7a52f617e216
	I0514 00:17:12.509251    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:12.509251    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:12.509251    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:12.509251    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:12.509251    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:13.004350    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:13.004547    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:13.004547    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:13.004547    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:13.007419    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:13.008127    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:13.008127    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:13.008127    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:13 GMT
	I0514 00:17:13.008225    4316 round_trippers.go:580]     Audit-Id: 9435aa9e-e42e-4c1e-b278-8a56ff8b06be
	I0514 00:17:13.008225    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:13.008225    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:13.008225    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:13.008662    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:13.506181    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:13.506612    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:13.506612    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:13.506612    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:13.513906    4316 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0514 00:17:13.513906    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:13.513906    4316 round_trippers.go:580]     Audit-Id: 1fd4b84f-bf3b-4eda-b37a-2467411fa5f8
	I0514 00:17:13.513906    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:13.513906    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:13.513906    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:13.513906    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:13.513906    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:13 GMT
	I0514 00:17:13.513906    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:13.514559    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:14.007980    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:14.008291    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:14.008291    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:14.008291    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:14.011685    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:14.012085    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:14.012085    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:14.012085    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:14.012085    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:14 GMT
	I0514 00:17:14.012085    4316 round_trippers.go:580]     Audit-Id: e1b51e40-3b3f-495a-8694-a3d7610858fd
	I0514 00:17:14.012085    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:14.012085    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:14.012584    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:14.493730    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:14.493730    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:14.493730    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:14.493730    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:14.497995    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:14.498082    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:14.498082    4316 round_trippers.go:580]     Audit-Id: 221d4596-8973-404b-ad7c-67e3e171c1c8
	I0514 00:17:14.498082    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:14.498082    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:14.498082    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:14.498082    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:14.498082    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:14 GMT
	I0514 00:17:14.498082    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:15.006074    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:15.006074    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:15.006074    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:15.006074    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:15.009748    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:15.009748    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:15.009748    4316 round_trippers.go:580]     Audit-Id: 9edf3cb0-a6a3-44c5-8b1a-2e66b38cce51
	I0514 00:17:15.009748    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:15.009748    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:15.010208    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:15.010208    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:15.010208    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:15 GMT
	I0514 00:17:15.010354    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:15.494447    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:15.494447    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:15.494522    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:15.494522    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:15.499662    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:17:15.500217    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:15.500217    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:15.500217    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:15.500217    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:15.500217    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:15.500217    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:15 GMT
	I0514 00:17:15.500339    4316 round_trippers.go:580]     Audit-Id: 895290ed-8eba-4c7c-94a6-b78d4dcc56bd
	I0514 00:17:15.500377    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:15.994939    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:15.995027    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:15.995027    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:15.995027    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:15.998439    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:15.998439    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:15.998439    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:15.998439    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:15.998439    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:15.998439    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:15.998439    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:16 GMT
	I0514 00:17:15.998439    4316 round_trippers.go:580]     Audit-Id: fec4758a-5cb2-45cb-adfd-b9cdc0dadde2
	I0514 00:17:16.002011    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:16.002627    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:16.496044    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:16.496307    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:16.496381    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:16.496381    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:16.505041    4316 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0514 00:17:16.505041    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:16.505226    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:16 GMT
	I0514 00:17:16.505226    4316 round_trippers.go:580]     Audit-Id: bbbf4538-dad2-42fb-8b32-36e25e0b7e24
	I0514 00:17:16.505226    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:16.505226    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:16.505226    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:16.505226    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:16.505497    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:16.995618    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:16.995618    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:16.995975    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:16.995975    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:17.002562    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:17.002562    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:17.002562    4316 round_trippers.go:580]     Audit-Id: 0e591535-ac13-4407-8ce4-b9fb09d627cb
	I0514 00:17:17.002562    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:17.002562    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:17.002562    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:17.002562    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:17.002562    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:17 GMT
	I0514 00:17:17.003179    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:17.509026    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:17.509261    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:17.509261    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:17.509261    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:17.516782    4316 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0514 00:17:17.516782    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:17.516782    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:17.516782    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:17.516782    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:17 GMT
	I0514 00:17:17.516782    4316 round_trippers.go:580]     Audit-Id: 3af16802-b6ae-4f85-833a-a044cbaeac1f
	I0514 00:17:17.516782    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:17.516782    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:17.516782    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:17.994933    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:17.994933    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:17.994933    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:17.994933    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:17.999521    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:17.999758    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:17.999758    4316 round_trippers.go:580]     Audit-Id: 59135110-920e-4b83-8b73-f58b4239205c
	I0514 00:17:17.999758    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:17.999758    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:17.999758    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:17.999758    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:17.999758    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:18 GMT
	I0514 00:17:18.000575    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:18.508707    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:18.508707    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:18.508815    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:18.508815    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:18.513115    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:18.513115    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:18.513115    4316 round_trippers.go:580]     Audit-Id: 479205e8-3c96-4338-82da-5e2ece09b2a9
	I0514 00:17:18.513115    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:18.513115    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:18.513223    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:18.513223    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:18.513223    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:18 GMT
	I0514 00:17:18.513223    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:18.513915    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:19.007141    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:19.007141    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:19.007141    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:19.007141    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:19.010735    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:19.010735    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:19.010735    4316 round_trippers.go:580]     Audit-Id: f7500649-d19f-478b-9507-76341986dee8
	I0514 00:17:19.010735    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:19.010735    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:19.010735    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:19.010735    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:19.010735    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:19 GMT
	I0514 00:17:19.011373    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:19.504284    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:19.504284    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:19.504284    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:19.504284    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:19.509050    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:19.509050    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:19.509050    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:19.509050    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:19 GMT
	I0514 00:17:19.509050    4316 round_trippers.go:580]     Audit-Id: e0317e84-fa61-483d-bf47-278b5128a9ad
	I0514 00:17:19.509050    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:19.509050    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:19.509050    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:19.509050    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:20.004793    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:20.004793    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:20.004882    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:20.004882    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:20.011393    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:20.011393    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:20.011393    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:20.011393    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:20.011393    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:20.011393    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:20 GMT
	I0514 00:17:20.011393    4316 round_trippers.go:580]     Audit-Id: e03104a6-fe19-4c40-926b-af0b58a3371f
	I0514 00:17:20.011393    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:20.012090    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:20.503277    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:20.503277    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:20.503357    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:20.503357    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:20.507687    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:20.507687    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:20.508195    4316 round_trippers.go:580]     Audit-Id: 81721e37-febf-417c-91d2-b94ae71958df
	I0514 00:17:20.508195    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:20.508195    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:20.508195    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:20.508195    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:20.508195    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:20 GMT
	I0514 00:17:20.508737    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:21.003439    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:21.003439    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:21.003576    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:21.003576    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:21.006737    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:21.006737    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:21.006737    4316 round_trippers.go:580]     Audit-Id: eb73f551-8506-4cbc-a46a-194448e260a7
	I0514 00:17:21.006737    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:21.006737    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:21.006737    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:21.006737    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:21.006737    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:21 GMT
	I0514 00:17:21.008340    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:21.008788    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:21.502201    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:21.502276    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:21.502276    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:21.502347    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:21.506139    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:21.506139    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:21.506139    4316 round_trippers.go:580]     Audit-Id: 69973cd4-3fc9-4861-8dfc-cbffa11d7466
	I0514 00:17:21.506139    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:21.506139    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:21.506139    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:21.506139    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:21.506139    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:21 GMT
	I0514 00:17:21.506139    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:22.001877    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:22.001877    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:22.001877    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:22.002176    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:22.005781    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:22.005781    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:22.005781    4316 round_trippers.go:580]     Audit-Id: 34ccf271-3d80-422f-83f3-a1097ded2732
	I0514 00:17:22.005781    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:22.005781    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:22.005781    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:22.005781    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:22.005781    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:22 GMT
	I0514 00:17:22.006653    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:22.503919    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:22.503919    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:22.503919    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:22.503919    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:22.508448    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:22.508448    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:22.508918    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:22 GMT
	I0514 00:17:22.508918    4316 round_trippers.go:580]     Audit-Id: 7ec04073-2193-4304-82b6-63ac74c95951
	I0514 00:17:22.508918    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:22.508918    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:22.508918    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:22.508918    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:22.509337    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:23.002506    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:23.002506    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:23.002506    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:23.002506    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:23.006672    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:23.006672    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:23.006672    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:23 GMT
	I0514 00:17:23.006672    4316 round_trippers.go:580]     Audit-Id: 4868224d-54d0-4d4a-a3f6-fb1a956fb101
	I0514 00:17:23.006672    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:23.006672    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:23.006672    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:23.006672    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:23.006672    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:23.499973    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:23.500188    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:23.500188    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:23.500188    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:23.506468    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:23.506468    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:23.506468    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:23.506468    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:23 GMT
	I0514 00:17:23.506468    4316 round_trippers.go:580]     Audit-Id: f2404b85-a611-4dd6-a0d1-e06e2c446b8f
	I0514 00:17:23.506468    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:23.506468    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:23.506468    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:23.506468    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:23.507187    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:24.000785    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:24.000785    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:24.000785    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:24.000871    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:24.006782    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:17:24.006782    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:24.006782    4316 round_trippers.go:580]     Audit-Id: e2dc4502-bf0b-4588-b95d-5022699196e6
	I0514 00:17:24.006782    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:24.006782    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:24.006782    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:24.006782    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:24.006782    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:24 GMT
	I0514 00:17:24.007754    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:24.500261    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:24.500261    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:24.500261    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:24.500261    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:24.503843    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:24.503843    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:24.503843    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:24.503843    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:24.503843    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:24.503843    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:24.503843    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:24 GMT
	I0514 00:17:24.503843    4316 round_trippers.go:580]     Audit-Id: cc556569-cefc-4af6-96ee-35e43cde5d74
	I0514 00:17:24.504062    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:24.998350    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:24.998411    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:24.998411    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:24.998411    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:25.003124    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:25.003124    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:25.003124    4316 round_trippers.go:580]     Audit-Id: 2a73aaae-2a39-4ddf-9ab8-9d341270bae3
	I0514 00:17:25.003124    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:25.003124    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:25.003124    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:25.003124    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:25.003124    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:25 GMT
	I0514 00:17:25.004765    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:25.497396    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:25.497396    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:25.497396    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:25.497396    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:25.501304    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:25.501387    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:25.501387    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:25.501471    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:25.501524    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:25.501524    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:25.501524    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:25 GMT
	I0514 00:17:25.501524    4316 round_trippers.go:580]     Audit-Id: 5769d55b-baf5-46cf-8dc5-210854181aaf
	I0514 00:17:25.501524    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:26.001833    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:26.001975    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:26.001975    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:26.001975    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:26.011931    4316 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0514 00:17:26.011931    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:26.012675    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:26 GMT
	I0514 00:17:26.012675    4316 round_trippers.go:580]     Audit-Id: fdf514e0-2c25-4751-9a09-7b9df168026b
	I0514 00:17:26.012675    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:26.012675    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:26.012675    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:26.012675    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:26.013075    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:26.013920    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:26.499505    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:26.499505    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:26.499505    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:26.499505    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:26.503545    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:26.503545    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:26.504100    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:26.504100    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:26.504100    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:26 GMT
	I0514 00:17:26.504100    4316 round_trippers.go:580]     Audit-Id: 05d7835b-eac2-482b-b4e1-38dd2971ad48
	I0514 00:17:26.504100    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:26.504100    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:26.504503    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:26.996069    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:26.996069    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:26.996069    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:26.996176    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:27.004041    4316 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0514 00:17:27.004093    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:27.004093    4316 round_trippers.go:580]     Audit-Id: 5742a2a6-01c0-4532-b55e-e43532408f92
	I0514 00:17:27.004093    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:27.004093    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:27.004093    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:27.004093    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:27.004093    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:27 GMT
	I0514 00:17:27.004093    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:27.497694    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:27.497797    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:27.497797    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:27.497797    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:27.501582    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:27.501582    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:27.501582    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:27 GMT
	I0514 00:17:27.501582    4316 round_trippers.go:580]     Audit-Id: 53ea86d1-ff2c-4219-955f-69164e50ba12
	I0514 00:17:27.501582    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:27.501582    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:27.501582    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:27.501582    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:27.502337    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:27.999944    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:27.999944    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:27.999944    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:27.999944    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:28.003969    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:28.003969    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:28.003969    4316 round_trippers.go:580]     Audit-Id: aedd3117-9185-42be-9ca5-dbf34ac0accd
	I0514 00:17:28.003969    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:28.003969    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:28.003969    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:28.003969    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:28.003969    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:28 GMT
	I0514 00:17:28.003969    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:28.499091    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:28.499452    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:28.499452    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:28.499570    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:28.503819    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:28.504123    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:28.504123    4316 round_trippers.go:580]     Audit-Id: 7b49abca-0ab8-4030-aa00-e3f6805f999f
	I0514 00:17:28.504123    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:28.504123    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:28.504220    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:28.504220    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:28.504220    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:28 GMT
	I0514 00:17:28.504584    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:28.505296    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:29.001123    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:29.001207    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:29.001207    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:29.001207    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:29.004425    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:29.004516    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:29.004516    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:29.004572    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:29 GMT
	I0514 00:17:29.004572    4316 round_trippers.go:580]     Audit-Id: 636522da-423f-4faf-8a08-900f96456c85
	I0514 00:17:29.004572    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:29.004572    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:29.004572    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:29.004822    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:29.500246    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:29.500456    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:29.500456    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:29.500456    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:29.504291    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:29.504291    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:29.504291    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:29.505248    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:29.505248    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:29.505248    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:29 GMT
	I0514 00:17:29.505248    4316 round_trippers.go:580]     Audit-Id: 733c4806-c1a8-4d11-9869-0dd64cb02ba6
	I0514 00:17:29.505248    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:29.505529    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:30.001139    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:30.001139    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:30.001313    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:30.001313    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:30.004627    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:30.004627    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:30.004627    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:30.005200    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:30.005200    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:30.005200    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:30.005200    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:30 GMT
	I0514 00:17:30.005200    4316 round_trippers.go:580]     Audit-Id: a5b4917c-6581-4585-ae74-a2192e69031c
	I0514 00:17:30.005515    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:30.503983    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:30.503983    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:30.503983    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:30.503983    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:30.509404    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:17:30.509404    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:30.509404    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:30 GMT
	I0514 00:17:30.509404    4316 round_trippers.go:580]     Audit-Id: de04191c-85cf-4518-9bd0-eaa1e90c242f
	I0514 00:17:30.509404    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:30.509404    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:30.509404    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:30.509404    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:30.510025    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:30.510682    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:30.999641    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:30.999641    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:30.999711    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:30.999711    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:31.008118    4316 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0514 00:17:31.008118    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:31.008118    4316 round_trippers.go:580]     Audit-Id: 4c862920-74d3-49f8-895a-cd8b9284790c
	I0514 00:17:31.008118    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:31.008118    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:31.008118    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:31.008118    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:31.008118    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:31 GMT
	I0514 00:17:31.008723    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:31.498866    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:31.499278    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:31.499278    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:31.499278    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:31.502529    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:31.503424    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:31.503424    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:31.503424    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:31.503424    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:31.503424    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:31.503424    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:31 GMT
	I0514 00:17:31.503424    4316 round_trippers.go:580]     Audit-Id: 214c2a75-2eae-45ff-b9a2-5f4d734eb068
	I0514 00:17:31.503590    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:31.994958    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:31.994958    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:31.994958    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:31.994958    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:31.998566    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:31.998863    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:31.998863    4316 round_trippers.go:580]     Audit-Id: 3da4648a-0265-4e07-8163-f148c8e88582
	I0514 00:17:31.998863    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:31.998863    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:31.998863    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:31.998863    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:31.998863    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:32 GMT
	I0514 00:17:31.998863    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:32.496175    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:32.496426    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:32.496426    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:32.496426    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:32.500299    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:32.500421    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:32.500421    4316 round_trippers.go:580]     Audit-Id: b96ce126-e0bf-43af-94c9-43590bcbbbfe
	I0514 00:17:32.500421    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:32.500421    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:32.500483    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:32.500483    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:32.500483    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:32 GMT
	I0514 00:17:32.500848    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:33.007423    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:33.007423    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:33.007423    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:33.007423    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:33.011877    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:33.011944    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:33.011944    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:33.011944    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:33.011944    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:33 GMT
	I0514 00:17:33.012013    4316 round_trippers.go:580]     Audit-Id: 2a7f09d6-d531-46b8-8b5e-1c7aac6609b0
	I0514 00:17:33.012013    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:33.012013    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:33.012261    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:33.013029    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:33.494994    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:33.495065    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:33.495136    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:33.495136    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:33.501513    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:33.501513    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:33.501513    4316 round_trippers.go:580]     Audit-Id: 69879bba-0b15-412c-b66e-4aef596b2aa1
	I0514 00:17:33.501513    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:33.501513    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:33.501513    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:33.501513    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:33.501513    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:33 GMT
	I0514 00:17:33.502208    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:34.007456    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:34.007524    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:34.007524    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:34.007601    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:34.010993    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:34.011466    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:34.011466    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:34.011466    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:34.011466    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:34.011466    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:34 GMT
	I0514 00:17:34.011466    4316 round_trippers.go:580]     Audit-Id: 0f7fa905-ee1d-40a4-8867-1d7b5cd76008
	I0514 00:17:34.011466    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:34.011703    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:34.507624    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:34.507624    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:34.507624    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:34.507624    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:34.511372    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:34.511536    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:34.511536    4316 round_trippers.go:580]     Audit-Id: a370dbda-fc73-4e5a-ab10-80e2e98429fb
	I0514 00:17:34.511536    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:34.511536    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:34.511536    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:34.511536    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:34.511536    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:34 GMT
	I0514 00:17:34.511691    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:35.003518    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:35.003518    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:35.003518    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:35.003610    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:35.010064    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:35.010064    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:35.010064    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:35.010064    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:35 GMT
	I0514 00:17:35.010064    4316 round_trippers.go:580]     Audit-Id: 18f3cc11-c3fb-4dc2-96df-ca23aaca2693
	I0514 00:17:35.010064    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:35.010064    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:35.010064    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:35.010064    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:35.503362    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:35.503429    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:35.503495    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:35.503495    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:35.507652    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:35.507652    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:35.507652    4316 round_trippers.go:580]     Audit-Id: 2ea09c9b-84e4-4471-b122-e78e9530a22c
	I0514 00:17:35.507652    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:35.508420    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:35.508420    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:35.508420    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:35.508420    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:35 GMT
	I0514 00:17:35.508691    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1761","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0514 00:17:35.509244    4316 node_ready.go:53] node "multinode-101100" has status "Ready":"False"
	I0514 00:17:35.997864    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:35.997864    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:35.997928    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:35.997928    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:36.004603    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:36.004603    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:36.004603    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:36 GMT
	I0514 00:17:36.004603    4316 round_trippers.go:580]     Audit-Id: 96e1d5fc-daa4-45b2-bae7-f98520d72724
	I0514 00:17:36.004603    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:36.004603    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:36.004603    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:36.004603    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:36.004603    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:36.005856    4316 node_ready.go:49] node "multinode-101100" has status "Ready":"True"
	I0514 00:17:36.005906    4316 node_ready.go:38] duration metric: took 36.0113743s for node "multinode-101100" to be "Ready" ...
	I0514 00:17:36.005958    4316 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 00:17:36.006019    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods
	I0514 00:17:36.006019    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:36.006019    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:36.006019    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:36.010618    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:36.010618    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:36.010618    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:36.010618    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:36.010618    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:36.010618    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:36.010618    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:36 GMT
	I0514 00:17:36.010618    4316 round_trippers.go:580]     Audit-Id: 068cc6f3-d6ce-4793-a62e-e203dc47caf3
	I0514 00:17:36.012984    4316 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1826"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87076 chars]
	I0514 00:17:36.016931    4316 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace to be "Ready" ...
	I0514 00:17:36.017061    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:36.017061    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:36.017061    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:36.017126    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:36.019703    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:36.019703    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:36.019703    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:36.019703    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:36.019703    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:36.019703    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:36.019703    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:36 GMT
	I0514 00:17:36.019703    4316 round_trippers.go:580]     Audit-Id: 3b7203b0-9e6c-4a40-ae60-0c1565d9d0ae
	I0514 00:17:36.020697    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:36.021315    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:36.021315    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:36.021315    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:36.021372    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:36.023642    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:36.023642    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:36.023642    4316 round_trippers.go:580]     Audit-Id: 3668b121-ba24-4002-9e03-a51fb3200ba1
	I0514 00:17:36.023642    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:36.023642    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:36.023642    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:36.023642    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:36.023642    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:36 GMT
	I0514 00:17:36.023642    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:36.527892    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:36.527892    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:36.527892    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:36.527892    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:36.531511    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:36.531662    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:36.531662    4316 round_trippers.go:580]     Audit-Id: 5f214ea3-d2b8-4129-9aed-5c6f5eba1019
	I0514 00:17:36.531662    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:36.531662    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:36.531662    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:36.531662    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:36.531662    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:36 GMT
	I0514 00:17:36.531750    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:36.532464    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:36.532464    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:36.532464    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:36.532464    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:36.537113    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:36.537113    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:36.537113    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:36.537113    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:36.537113    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:36.537113    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:36 GMT
	I0514 00:17:36.537113    4316 round_trippers.go:580]     Audit-Id: 14adb0c8-9869-49f9-a0c9-c319876164f8
	I0514 00:17:36.537113    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:36.537652    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:37.027356    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:37.027608    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:37.027686    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:37.027686    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:37.031944    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:37.031944    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:37.031944    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:37.031944    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:37.031944    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:37 GMT
	I0514 00:17:37.031944    4316 round_trippers.go:580]     Audit-Id: 44431413-c014-4d24-9cdf-ab2569815d98
	I0514 00:17:37.032079    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:37.032079    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:37.032502    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:37.033829    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:37.033903    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:37.033903    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:37.033903    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:37.036639    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:37.036935    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:37.036935    4316 round_trippers.go:580]     Audit-Id: 0350951a-02b2-4fd2-b3bb-08335425abee
	I0514 00:17:37.036935    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:37.036935    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:37.036935    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:37.036935    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:37.036935    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:37 GMT
	I0514 00:17:37.037344    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:37.527783    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:37.527865    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:37.527865    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:37.527865    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:37.531713    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:37.531713    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:37.531713    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:37.531713    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:37 GMT
	I0514 00:17:37.531713    4316 round_trippers.go:580]     Audit-Id: 1fafa712-1225-4491-973c-42c8fc84a4b1
	I0514 00:17:37.531713    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:37.531713    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:37.531713    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:37.531713    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:37.532869    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:37.532869    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:37.532869    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:37.532869    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:37.535419    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:37.535419    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:37.535419    4316 round_trippers.go:580]     Audit-Id: 7157764a-c376-403e-bcd1-3f311b4bb645
	I0514 00:17:37.535419    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:37.535419    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:37.535419    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:37.535419    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:37.535968    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:37 GMT
	I0514 00:17:37.536223    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:38.024639    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:38.024694    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:38.024726    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:38.024726    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:38.028832    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:38.028832    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:38.028832    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:38 GMT
	I0514 00:17:38.028832    4316 round_trippers.go:580]     Audit-Id: 36a59dd7-5add-4132-ab14-00a074e5e56f
	I0514 00:17:38.028832    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:38.028832    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:38.028832    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:38.028832    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:38.028832    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:38.029690    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:38.029690    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:38.029690    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:38.029753    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:38.032797    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:38.032853    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:38.032853    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:38.032853    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:38.032853    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:38.032853    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:38 GMT
	I0514 00:17:38.032853    4316 round_trippers.go:580]     Audit-Id: a9a5cfc2-2338-4326-9faa-bf514c981ab8
	I0514 00:17:38.032853    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:38.032853    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:38.033553    4316 pod_ready.go:102] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"False"
	I0514 00:17:38.524972    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:38.524972    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:38.525270    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:38.525270    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:38.529619    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:38.530439    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:38.530439    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:38 GMT
	I0514 00:17:38.530439    4316 round_trippers.go:580]     Audit-Id: feece2ad-cb70-476d-9b9a-d39e71a5f295
	I0514 00:17:38.530439    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:38.530439    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:38.530439    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:38.530439    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:38.530991    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:38.532104    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:38.532104    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:38.532104    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:38.532189    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:38.535396    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:38.535396    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:38.535396    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:38.535396    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:38 GMT
	I0514 00:17:38.535617    4316 round_trippers.go:580]     Audit-Id: 1bde8587-06b0-4adc-bfcc-1f6819def8cc
	I0514 00:17:38.535723    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:38.535723    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:38.535723    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:38.536119    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:39.026266    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:39.026266    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:39.026266    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:39.026266    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:39.030183    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:39.030183    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:39.030183    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:39.030183    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:39.030183    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:39.030183    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:39 GMT
	I0514 00:17:39.030183    4316 round_trippers.go:580]     Audit-Id: e5255604-5444-4f2b-a83e-7b747f867314
	I0514 00:17:39.030183    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:39.030183    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:39.031075    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:39.031151    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:39.031151    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:39.031151    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:39.033360    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:39.034295    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:39.034295    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:39.034295    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:39.034295    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:39.034295    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:39 GMT
	I0514 00:17:39.034295    4316 round_trippers.go:580]     Audit-Id: 6caf351d-6630-435c-ad7d-84d2810267f4
	I0514 00:17:39.034295    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:39.035361    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:39.522968    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:39.522968    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:39.522968    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:39.522968    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:39.528757    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:17:39.528757    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:39.528757    4316 round_trippers.go:580]     Audit-Id: 19e650ad-0907-4c36-b29b-8a1199516028
	I0514 00:17:39.528757    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:39.528757    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:39.528757    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:39.528757    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:39.528757    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:39 GMT
	I0514 00:17:39.528757    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:39.529969    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:39.530022    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:39.530070    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:39.530070    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:39.532876    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:39.532876    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:39.532876    4316 round_trippers.go:580]     Audit-Id: 6d2e56a7-9bac-4803-99e8-e1c49670b829
	I0514 00:17:39.532876    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:39.532876    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:39.532876    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:39.532876    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:39.532876    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:39 GMT
	I0514 00:17:39.534036    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:40.026994    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:40.027078    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:40.027078    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:40.027078    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:40.030390    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:40.030390    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:40.030390    4316 round_trippers.go:580]     Audit-Id: 658b85ed-ea30-4dd0-94ff-8006fae55e98
	I0514 00:17:40.030390    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:40.030390    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:40.030390    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:40.030390    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:40.030390    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:40 GMT
	I0514 00:17:40.031146    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:40.032140    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:40.032140    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:40.032249    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:40.032249    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:40.035718    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:40.035718    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:40.035718    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:40.035718    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:40.035718    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:40.035718    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:40.035718    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:40 GMT
	I0514 00:17:40.035718    4316 round_trippers.go:580]     Audit-Id: be2e52f6-a7a5-4357-8179-6c5b3aa5e955
	I0514 00:17:40.035718    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:40.035718    4316 pod_ready.go:102] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"False"
	I0514 00:17:40.526870    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:40.527181    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:40.527181    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:40.527181    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:40.530385    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:40.530385    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:40.530385    4316 round_trippers.go:580]     Audit-Id: b7d7852c-b5e2-4e7b-8ac0-445fc8ec8aa8
	I0514 00:17:40.530385    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:40.530385    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:40.530385    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:40.530385    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:40.530385    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:40 GMT
	I0514 00:17:40.531484    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:40.531652    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:40.531652    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:40.532180    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:40.532180    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:40.539969    4316 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0514 00:17:40.539969    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:40.539969    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:40.539969    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:40 GMT
	I0514 00:17:40.539969    4316 round_trippers.go:580]     Audit-Id: 51f8fb00-260e-472c-b6ce-cebcd4657b9f
	I0514 00:17:40.539969    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:40.539969    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:40.539969    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:40.539969    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:41.027667    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:41.027976    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:41.027976    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:41.027976    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:41.034298    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:41.034590    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:41.034590    4316 round_trippers.go:580]     Audit-Id: 6e465e73-5fde-4b03-a6ef-8a76d9d0a0ea
	I0514 00:17:41.034590    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:41.034590    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:41.034590    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:41.034590    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:41.034590    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:41 GMT
	I0514 00:17:41.034816    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:41.035391    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:41.035490    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:41.035490    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:41.035490    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:41.038660    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:41.038660    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:41.038815    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:41.038815    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:41.038815    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:41 GMT
	I0514 00:17:41.038815    4316 round_trippers.go:580]     Audit-Id: fc8e2b6c-50ab-4e41-931a-b5e3591a74fe
	I0514 00:17:41.038815    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:41.038815    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:41.039216    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:41.527583    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:41.527583    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:41.527583    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:41.527583    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:41.531258    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:41.531258    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:41.531258    4316 round_trippers.go:580]     Audit-Id: 43de37d2-42d4-49ed-b700-dbc652fc88df
	I0514 00:17:41.531258    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:41.531258    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:41.531258    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:41.531258    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:41.531258    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:41 GMT
	I0514 00:17:41.532017    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:41.533115    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:41.533115    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:41.533193    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:41.533193    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:41.538360    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:17:41.538360    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:41.538360    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:41.538360    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:41.538360    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:41.538360    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:41.538360    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:41 GMT
	I0514 00:17:41.538360    4316 round_trippers.go:580]     Audit-Id: 177c2e4f-ef92-4c7d-af2c-f0740ad67947
	I0514 00:17:41.539542    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:42.024220    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:42.024441    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:42.024441    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:42.024796    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:42.030251    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:17:42.030251    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:42.030251    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:42.030251    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:42.030251    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:42.030251    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:42 GMT
	I0514 00:17:42.030251    4316 round_trippers.go:580]     Audit-Id: ba56d365-23e5-43cf-b1bb-d17c09b50685
	I0514 00:17:42.030251    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:42.030901    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:42.032706    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:42.032706    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:42.032706    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:42.032706    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:42.035288    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:42.035288    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:42.035288    4316 round_trippers.go:580]     Audit-Id: 62a789cc-ae0c-46fd-a0c3-6620dabddcbd
	I0514 00:17:42.035288    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:42.035288    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:42.035288    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:42.035288    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:42.035288    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:42 GMT
	I0514 00:17:42.036203    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:42.036629    4316 pod_ready.go:102] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"False"
	I0514 00:17:42.523370    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:42.523370    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:42.523370    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:42.523370    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:42.527812    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:42.527812    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:42.527812    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:42.527812    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:42.527812    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:42.527812    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:42 GMT
	I0514 00:17:42.527812    4316 round_trippers.go:580]     Audit-Id: 50de682e-f460-4d63-adb4-c8e0fa22b8dd
	I0514 00:17:42.527812    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:42.527812    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:42.529169    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:42.529222    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:42.529222    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:42.529222    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:42.531409    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:42.532238    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:42.532238    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:42.532238    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:42 GMT
	I0514 00:17:42.532238    4316 round_trippers.go:580]     Audit-Id: 32228d06-fbcd-42e6-9a2b-36415764043b
	I0514 00:17:42.532238    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:42.532238    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:42.532238    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:42.532439    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:43.023658    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:43.023658    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:43.023658    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:43.023658    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:43.027316    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:43.028364    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:43.028364    4316 round_trippers.go:580]     Audit-Id: cac3d502-1a69-41ef-a6ae-83a4149eec8a
	I0514 00:17:43.028438    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:43.028438    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:43.028438    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:43.028438    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:43.028438    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:43 GMT
	I0514 00:17:43.028841    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:43.029541    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:43.029541    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:43.029541    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:43.029541    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:43.035874    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:43.035874    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:43.035874    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:43.035874    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:43.035874    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:43.035874    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:43 GMT
	I0514 00:17:43.035874    4316 round_trippers.go:580]     Audit-Id: 7d057147-eeb6-49ea-8529-1c1f0753a3b5
	I0514 00:17:43.035874    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:43.035874    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:43.522358    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:43.522358    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:43.522358    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:43.522358    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:43.525993    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:43.525993    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:43.525993    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:43.526544    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:43 GMT
	I0514 00:17:43.526544    4316 round_trippers.go:580]     Audit-Id: 0c69215c-1ad9-413a-8a0b-c4570170e003
	I0514 00:17:43.526544    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:43.526544    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:43.526544    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:43.526809    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:43.527500    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:43.527500    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:43.527580    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:43.527580    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:43.530572    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:43.530572    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:43.530572    4316 round_trippers.go:580]     Audit-Id: b845f9d0-8394-4e1a-aa59-46a48ee02697
	I0514 00:17:43.530572    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:43.530572    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:43.530572    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:43.530572    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:43.530572    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:43 GMT
	I0514 00:17:43.530572    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:44.021694    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:44.021817    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:44.021817    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:44.021817    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:44.024993    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:44.024993    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:44.024993    4316 round_trippers.go:580]     Audit-Id: 1fa03107-1dbf-4e37-a0de-66594064883e
	I0514 00:17:44.024993    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:44.024993    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:44.024993    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:44.024993    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:44.024993    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:44 GMT
	I0514 00:17:44.025616    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:44.026464    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:44.026551    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:44.026551    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:44.026551    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:44.028699    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:44.028699    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:44.028699    4316 round_trippers.go:580]     Audit-Id: 3630a112-874d-446b-8f06-8b3bce332d82
	I0514 00:17:44.028699    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:44.028699    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:44.029256    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:44.029256    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:44.029256    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:44 GMT
	I0514 00:17:44.029528    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:44.519859    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:44.519963    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:44.519963    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:44.519963    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:44.523665    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:44.523665    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:44.523665    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:44.523777    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:44.523777    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:44.523777    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:44 GMT
	I0514 00:17:44.523777    4316 round_trippers.go:580]     Audit-Id: 77ecb428-4c35-4383-9252-5aa64bc134d0
	I0514 00:17:44.523777    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:44.524048    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:44.525059    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:44.525059    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:44.525059    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:44.525132    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:44.531638    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:44.531638    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:44.531638    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:44.531638    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:44.531638    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:44.531638    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:44 GMT
	I0514 00:17:44.531638    4316 round_trippers.go:580]     Audit-Id: 05725037-a9aa-4e85-a952-e155e9475017
	I0514 00:17:44.531638    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:44.532165    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:44.532348    4316 pod_ready.go:102] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"False"
	I0514 00:17:45.018533    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:45.018533    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:45.018533    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:45.018533    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:45.025787    4316 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0514 00:17:45.025787    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:45.025787    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:45.025787    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:45.025787    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:45 GMT
	I0514 00:17:45.025787    4316 round_trippers.go:580]     Audit-Id: 51a76f65-7d7e-40a8-96ed-3bca41035f4a
	I0514 00:17:45.025787    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:45.025787    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:45.026332    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:45.027299    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:45.027299    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:45.027299    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:45.027299    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:45.029876    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:45.029876    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:45.029876    4316 round_trippers.go:580]     Audit-Id: 37f5dd10-668a-415b-9c60-819edb09b861
	I0514 00:17:45.029876    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:45.029876    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:45.030819    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:45.030819    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:45.030819    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:45 GMT
	I0514 00:17:45.031098    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:45.531320    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:45.531427    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:45.531427    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:45.531427    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:45.534320    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:45.534320    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:45.534320    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:45.534320    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:45 GMT
	I0514 00:17:45.534320    4316 round_trippers.go:580]     Audit-Id: 6230fcab-0584-4536-b8d6-e27e3a0859ce
	I0514 00:17:45.534320    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:45.534320    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:45.534320    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:45.534320    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:45.536830    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:45.536830    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:45.536830    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:45.536830    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:45.539658    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:45.539658    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:45.539658    4316 round_trippers.go:580]     Audit-Id: 2cab55b6-3170-428b-8848-1b08d5116ca2
	I0514 00:17:45.539658    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:45.539658    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:45.539658    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:45.539658    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:45.539658    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:45 GMT
	I0514 00:17:45.541017    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:46.029355    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:46.029355    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:46.029355    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:46.029355    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:46.033948    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:46.033948    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:46.033948    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:46.033948    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:46.033948    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:46.033948    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:46.033948    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:46 GMT
	I0514 00:17:46.033948    4316 round_trippers.go:580]     Audit-Id: 4ac9e535-bdef-4c29-81e3-32122b13d977
	I0514 00:17:46.034318    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:46.034927    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:46.035036    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:46.035036    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:46.035036    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:46.037370    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:46.038371    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:46.038371    4316 round_trippers.go:580]     Audit-Id: fee4d1d2-aac3-4605-8a65-3a60ded8c698
	I0514 00:17:46.038371    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:46.038453    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:46.038453    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:46.038453    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:46.038453    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:46 GMT
	I0514 00:17:46.038578    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:46.530963    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:46.531194    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:46.531194    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:46.531194    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:46.535661    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:46.535661    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:46.535661    4316 round_trippers.go:580]     Audit-Id: 7252731b-509f-4a5b-b48a-3d9d9645275b
	I0514 00:17:46.535661    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:46.535661    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:46.535661    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:46.535661    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:46.535661    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:46 GMT
	I0514 00:17:46.535661    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:46.536536    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:46.536536    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:46.536536    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:46.536536    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:46.543062    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:46.543062    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:46.543062    4316 round_trippers.go:580]     Audit-Id: d57ff020-e9f7-4f53-874e-5f0bbb759d3b
	I0514 00:17:46.543062    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:46.543062    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:46.543062    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:46.543062    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:46.543062    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:46 GMT
	I0514 00:17:46.543062    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:46.543705    4316 pod_ready.go:102] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"False"
	I0514 00:17:47.028493    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:47.028739    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:47.028739    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:47.028739    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:47.031945    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:47.031945    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:47.031945    4316 round_trippers.go:580]     Audit-Id: 0a9d3b0d-2bc0-43b6-b14f-f2032c54e4c6
	I0514 00:17:47.031945    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:47.032859    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:47.032859    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:47.032859    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:47.032859    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:47 GMT
	I0514 00:17:47.036995    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:47.038025    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:47.038025    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:47.038025    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:47.038025    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:47.040443    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:47.040443    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:47.040443    4316 round_trippers.go:580]     Audit-Id: 5c129fa9-e0d3-4193-a3d7-729410f27adf
	I0514 00:17:47.040443    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:47.040443    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:47.040443    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:47.040443    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:47.040443    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:47 GMT
	I0514 00:17:47.041351    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:47.527120    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:47.527120    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:47.527120    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:47.527120    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:47.532151    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:17:47.532151    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:47.532151    4316 round_trippers.go:580]     Audit-Id: dd0cabbf-1337-46d3-b794-9135acdd220a
	I0514 00:17:47.532151    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:47.532151    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:47.532151    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:47.532151    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:47.532151    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:47 GMT
	I0514 00:17:47.532938    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:47.534042    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:47.534134    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:47.534134    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:47.534134    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:47.536979    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:47.536979    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:47.536979    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:47 GMT
	I0514 00:17:47.536979    4316 round_trippers.go:580]     Audit-Id: 37c130d7-d5d6-4918-af2d-da47f93de7bd
	I0514 00:17:47.536979    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:47.536979    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:47.536979    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:47.536979    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:47.537619    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:48.025944    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:48.025944    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:48.026028    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:48.026028    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:48.030327    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:48.030618    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:48.030618    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:48 GMT
	I0514 00:17:48.030618    4316 round_trippers.go:580]     Audit-Id: 5c0285b7-949f-4a03-90b7-9fba17059dbe
	I0514 00:17:48.030618    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:48.030618    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:48.030618    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:48.030618    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:48.030618    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:48.031224    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:48.031224    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:48.031224    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:48.031224    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:48.036454    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:17:48.036535    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:48.036535    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:48.036664    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:48.036664    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:48.036664    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:48 GMT
	I0514 00:17:48.036664    4316 round_trippers.go:580]     Audit-Id: d0554898-18a1-4ab4-8efe-68db0b53637e
	I0514 00:17:48.036664    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:48.036664    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:48.524363    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:48.524441    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:48.524441    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:48.524441    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:48.527732    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:48.527840    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:48.527840    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:48 GMT
	I0514 00:17:48.527840    4316 round_trippers.go:580]     Audit-Id: aa1513ea-3bd5-44ca-83df-a4cc0909b7e5
	I0514 00:17:48.527840    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:48.527840    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:48.527840    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:48.527840    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:48.528050    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:48.528687    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:48.528687    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:48.528775    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:48.528775    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:48.530906    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:48.530906    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:48.530906    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:48.530906    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:48.530906    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:48.530906    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:48 GMT
	I0514 00:17:48.530906    4316 round_trippers.go:580]     Audit-Id: 018f832b-ed87-4a1e-9c55-c079465cbe8c
	I0514 00:17:48.530906    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:48.532505    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:49.026265    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:49.026265    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:49.026265    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:49.026265    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:49.029715    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:49.029715    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:49.029715    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:49.029715    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:49 GMT
	I0514 00:17:49.029715    4316 round_trippers.go:580]     Audit-Id: 642aa73d-4189-4abe-b133-39a86a797e34
	I0514 00:17:49.029715    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:49.029715    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:49.030587    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:49.030827    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:49.031840    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:49.031924    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:49.031924    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:49.031924    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:49.034655    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:49.035513    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:49.035513    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:49.035513    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:49.035513    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:49 GMT
	I0514 00:17:49.035513    4316 round_trippers.go:580]     Audit-Id: da6c4825-b264-463b-bb57-9cb4029bc1d4
	I0514 00:17:49.035513    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:49.035513    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:49.035513    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:49.036655    4316 pod_ready.go:102] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"False"
	I0514 00:17:49.523250    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:49.523250    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:49.523250    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:49.523250    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:49.526943    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:49.526943    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:49.526943    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:49 GMT
	I0514 00:17:49.526943    4316 round_trippers.go:580]     Audit-Id: 9deea6e8-5108-4a83-af6e-0ecbffbef704
	I0514 00:17:49.526943    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:49.526943    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:49.526943    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:49.526943    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:49.527537    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:49.528580    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:49.528580    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:49.528698    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:49.528698    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:49.531027    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:49.531027    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:49.531027    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:49.531027    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:49 GMT
	I0514 00:17:49.531027    4316 round_trippers.go:580]     Audit-Id: 0aa5b480-c8d6-4e43-8e08-e5d6df13934a
	I0514 00:17:49.531027    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:49.531027    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:49.531027    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:49.531845    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:50.025361    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:50.025361    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:50.025472    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:50.025472    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:50.029062    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:50.029062    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:50.029062    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:50.029062    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:50.029062    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:50.029062    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:50 GMT
	I0514 00:17:50.029062    4316 round_trippers.go:580]     Audit-Id: 056292cc-f7c7-44b1-975f-a9ab1dc1c8d3
	I0514 00:17:50.029062    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:50.029471    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:50.030226    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:50.030226    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:50.030226    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:50.030226    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:50.032516    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:50.032516    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:50.032516    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:50.032516    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:50.032516    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:50.032516    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:50 GMT
	I0514 00:17:50.032516    4316 round_trippers.go:580]     Audit-Id: b49ea68b-ba4a-44dd-9f09-77598f4a3550
	I0514 00:17:50.032516    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:50.033507    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:50.527792    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:50.528094    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:50.528094    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:50.528094    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:50.533474    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:17:50.533474    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:50.533474    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:50.533474    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:50.533474    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:50.533474    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:50 GMT
	I0514 00:17:50.533474    4316 round_trippers.go:580]     Audit-Id: edb67c8b-4091-4fc6-b7a3-4ab0b3702afe
	I0514 00:17:50.533474    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:50.534010    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:50.534687    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:50.534687    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:50.534687    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:50.534687    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:50.536872    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:50.536872    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:50.536872    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:50.536872    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:50 GMT
	I0514 00:17:50.537786    4316 round_trippers.go:580]     Audit-Id: b2c24719-a758-4715-b0a1-6ad73d86d33f
	I0514 00:17:50.537786    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:50.537786    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:50.537786    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:50.538018    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:51.026950    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:51.027030    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:51.027030    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:51.027030    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:51.030389    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:51.030389    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:51.030389    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:51 GMT
	I0514 00:17:51.030389    4316 round_trippers.go:580]     Audit-Id: 98bc0eee-b424-4261-8299-d6d2273fd477
	I0514 00:17:51.030389    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:51.030389    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:51.030389    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:51.030389    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:51.031159    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:51.031785    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:51.031785    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:51.031785    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:51.031785    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:51.038960    4316 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0514 00:17:51.038960    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:51.038960    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:51.038960    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:51.039722    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:51.039722    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:51 GMT
	I0514 00:17:51.039722    4316 round_trippers.go:580]     Audit-Id: d347b7a8-22be-418c-b29d-1c73e6a6cb47
	I0514 00:17:51.039722    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:51.039759    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:51.039759    4316 pod_ready.go:102] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"False"
	I0514 00:17:51.523392    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:51.523490    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:51.523490    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:51.523490    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:51.527912    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:51.528020    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:51.528020    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:51.528020    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:51.528020    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:51 GMT
	I0514 00:17:51.528020    4316 round_trippers.go:580]     Audit-Id: efcd26a9-206e-4a12-b875-59c1b9a56667
	I0514 00:17:51.528020    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:51.528122    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:51.528382    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:51.528952    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:51.529014    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:51.529014    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:51.529014    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:51.532031    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:51.532031    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:51.532031    4316 round_trippers.go:580]     Audit-Id: e05f1928-9e76-42ab-b917-e5b46fe2af7b
	I0514 00:17:51.532031    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:51.532031    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:51.532031    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:51.532508    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:51.532508    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:51 GMT
	I0514 00:17:51.532858    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:52.022691    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:52.023148    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:52.023148    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:52.023148    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:52.027052    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:52.027052    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:52.027052    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:52.027052    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:52.027185    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:52.027185    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:52 GMT
	I0514 00:17:52.027185    4316 round_trippers.go:580]     Audit-Id: 46cf10a7-354f-4baf-a15a-e8397b0e1ded
	I0514 00:17:52.027185    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:52.027390    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:52.028263    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:52.028263    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:52.028263    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:52.028263    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:52.031481    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:52.031559    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:52.031559    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:52 GMT
	I0514 00:17:52.031559    4316 round_trippers.go:580]     Audit-Id: 9f69a66e-5696-4b63-a090-654f65b81422
	I0514 00:17:52.031628    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:52.031628    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:52.031660    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:52.031660    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:52.032201    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:52.523056    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:52.523056    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:52.523056    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:52.523056    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:52.526681    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:52.527238    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:52.527238    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:52 GMT
	I0514 00:17:52.527324    4316 round_trippers.go:580]     Audit-Id: dc3a466e-3935-4ecc-bb26-6b4b0364121f
	I0514 00:17:52.527324    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:52.527324    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:52.527324    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:52.527324    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:52.527478    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:52.528610    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:52.528610    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:52.528610    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:52.528699    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:52.531421    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:52.531756    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:52.531756    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:52 GMT
	I0514 00:17:52.531756    4316 round_trippers.go:580]     Audit-Id: af408f18-ae4f-44bf-988d-fbca2c6bf110
	I0514 00:17:52.531756    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:52.531756    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:52.531756    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:52.531756    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:52.531756    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:53.021593    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:53.021593    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:53.021593    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:53.021593    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:53.030886    4316 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0514 00:17:53.030886    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:53.030886    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:53.030886    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:53.030886    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:53.030886    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:53 GMT
	I0514 00:17:53.030886    4316 round_trippers.go:580]     Audit-Id: d5198b13-1397-4ebd-a609-4c9adfdcaa37
	I0514 00:17:53.030886    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:53.031480    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:53.031559    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:53.032089    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:53.032089    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:53.032127    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:53.034968    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:53.035117    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:53.035117    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:53 GMT
	I0514 00:17:53.035117    4316 round_trippers.go:580]     Audit-Id: 863871e3-cb22-4b48-ab59-9e78835abc08
	I0514 00:17:53.035117    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:53.035117    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:53.035117    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:53.035117    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:53.035117    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:53.521051    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:53.521051    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:53.521051    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:53.521051    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:53.524610    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:53.525142    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:53.525210    4316 round_trippers.go:580]     Audit-Id: 0a620f09-3bdd-45d6-8a96-2d80f3819fc8
	I0514 00:17:53.525210    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:53.525210    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:53.525210    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:53.525210    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:53.525210    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:53 GMT
	I0514 00:17:53.525417    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:53.526148    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:53.526173    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:53.526173    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:53.526173    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:53.530946    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:53.531092    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:53.531118    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:53 GMT
	I0514 00:17:53.531118    4316 round_trippers.go:580]     Audit-Id: 466c4b1e-c869-4a79-b505-3aaa73af7b4a
	I0514 00:17:53.531118    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:53.531118    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:53.531118    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:53.531118    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:53.531118    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:53.531847    4316 pod_ready.go:102] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"False"
	I0514 00:17:54.025489    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:54.025489    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:54.025598    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:54.025598    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:54.030382    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:54.030382    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:54.030382    4316 round_trippers.go:580]     Audit-Id: b47d5652-2f48-45dd-baf2-ee76a3ece10a
	I0514 00:17:54.030382    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:54.030382    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:54.030382    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:54.030382    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:54.030382    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:54 GMT
	I0514 00:17:54.031721    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:54.032817    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:54.032817    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:54.032890    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:54.032890    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:54.037185    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:54.037185    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:54.037185    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:54.037185    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:54.037185    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:54.037185    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:54 GMT
	I0514 00:17:54.037185    4316 round_trippers.go:580]     Audit-Id: 9ac2b6ef-177a-4417-97fe-e3b597af0f9b
	I0514 00:17:54.037185    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:54.038153    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:54.522088    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:54.522088    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:54.522088    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:54.522088    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:54.525618    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:54.525618    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:54.525618    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:54.525618    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:54 GMT
	I0514 00:17:54.525618    4316 round_trippers.go:580]     Audit-Id: ed1231eb-95a1-4ec2-a6ee-d52675b1c727
	I0514 00:17:54.525618    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:54.525618    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:54.525618    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:54.527020    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:54.527870    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:54.527870    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:54.527870    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:54.527870    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:54.535265    4316 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0514 00:17:54.535265    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:54.535265    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:54 GMT
	I0514 00:17:54.535265    4316 round_trippers.go:580]     Audit-Id: c485bb88-e899-4e8b-94dc-d819aee6a7d4
	I0514 00:17:54.535265    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:54.535265    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:54.535265    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:54.535265    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:54.535265    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:55.033364    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:55.033364    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:55.033364    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:55.033364    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:55.038301    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:55.038392    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:55.038392    4316 round_trippers.go:580]     Audit-Id: 63ece2ae-a610-4321-8d8c-e032e79b23d7
	I0514 00:17:55.038392    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:55.038392    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:55.038392    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:55.038392    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:55.038392    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:55 GMT
	I0514 00:17:55.038651    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:55.039811    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:55.039811    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:55.039897    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:55.039897    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:55.043267    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:55.043267    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:55.043267    4316 round_trippers.go:580]     Audit-Id: b67c0317-b8c3-40f2-bbd2-4aba63e33cd7
	I0514 00:17:55.043267    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:55.043267    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:55.043267    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:55.043267    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:55.043267    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:55 GMT
	I0514 00:17:55.043528    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:55.529916    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:55.529998    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:55.529998    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:55.529998    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:55.533310    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:55.533310    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:55.533310    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:55.533310    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:55.533310    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:55.533310    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:55 GMT
	I0514 00:17:55.533310    4316 round_trippers.go:580]     Audit-Id: 4f3ab88d-95b2-41ef-ba4d-70a22f28f91a
	I0514 00:17:55.533310    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:55.534150    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:55.534861    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:55.534861    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:55.534861    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:55.534918    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:55.537797    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:55.537857    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:55.537857    4316 round_trippers.go:580]     Audit-Id: 5b734f73-adce-46db-8245-6015aa6cbc02
	I0514 00:17:55.537857    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:55.537857    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:55.537896    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:55.537896    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:55.537896    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:55 GMT
	I0514 00:17:55.538002    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:55.538002    4316 pod_ready.go:102] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"False"
	I0514 00:17:56.026948    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:56.027296    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:56.027296    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:56.027296    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:56.031178    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:56.031178    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:56.031178    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:56.031178    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:56 GMT
	I0514 00:17:56.031178    4316 round_trippers.go:580]     Audit-Id: d09419d3-13e1-4567-afaa-949a552f4f07
	I0514 00:17:56.031178    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:56.031265    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:56.031265    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:56.031265    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:56.032908    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:56.032992    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:56.032992    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:56.032992    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:56.036385    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:56.036385    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:56.036716    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:56.036716    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:56.036716    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:56.036716    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:56.036716    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:56 GMT
	I0514 00:17:56.036716    4316 round_trippers.go:580]     Audit-Id: e990ca4d-52bc-4ce8-b7ac-aa5512ac0ece
	I0514 00:17:56.036835    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:56.526963    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:56.526963    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:56.526963    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:56.526963    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:56.530895    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:56.530895    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:56.530895    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:56.530895    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:56 GMT
	I0514 00:17:56.530992    4316 round_trippers.go:580]     Audit-Id: adf99547-9dbd-485e-a3cd-4570031c5388
	I0514 00:17:56.530992    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:56.530992    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:56.530992    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:56.531186    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:56.532122    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:56.532122    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:56.532122    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:56.532122    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:56.534649    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:56.534649    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:56.534649    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:56.534649    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:56 GMT
	I0514 00:17:56.534649    4316 round_trippers.go:580]     Audit-Id: 95e68695-4c8c-47a4-bcbe-091d9e7ca165
	I0514 00:17:56.534649    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:56.535581    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:56.535581    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:56.535738    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:57.022063    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:57.022063    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:57.022063    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:57.022063    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:57.025716    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:57.025716    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:57.025716    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:57.025961    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:57.025961    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:57 GMT
	I0514 00:17:57.025961    4316 round_trippers.go:580]     Audit-Id: 68259e4b-4069-471b-a8b6-166e95a74498
	I0514 00:17:57.025961    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:57.025961    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:57.026103    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:57.026762    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:57.026762    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:57.026762    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:57.026762    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:57.029561    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:57.029561    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:57.029561    4316 round_trippers.go:580]     Audit-Id: 719b929f-0462-46cd-8554-0182ff5deb56
	I0514 00:17:57.029561    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:57.029561    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:57.029561    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:57.029561    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:57.029561    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:57 GMT
	I0514 00:17:57.030449    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:57.521718    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:57.521718    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:57.521718    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:57.521718    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:57.524837    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:57.524837    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:57.524837    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:57 GMT
	I0514 00:17:57.524837    4316 round_trippers.go:580]     Audit-Id: a5123dd3-6a71-45b4-929f-98de09033747
	I0514 00:17:57.524837    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:57.524837    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:57.524837    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:57.524837    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:57.525811    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:57.526499    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:57.526499    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:57.526499    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:57.526499    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:57.529860    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:57.529860    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:57.529860    4316 round_trippers.go:580]     Audit-Id: dd9b9175-3046-4168-87e4-ecbccf307082
	I0514 00:17:57.529860    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:57.529860    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:57.529860    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:57.529860    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:57.529994    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:57 GMT
	I0514 00:17:57.530421    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:58.023735    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:58.023735    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:58.023735    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:58.023735    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:58.028021    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:58.028462    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:58.028462    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:58.028462    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:58.028462    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:58.028462    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:58 GMT
	I0514 00:17:58.028462    4316 round_trippers.go:580]     Audit-Id: 3ad01ec4-c086-44bf-908c-a03eb33ea21d
	I0514 00:17:58.028462    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:58.028935    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:58.029809    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:58.029991    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:58.029991    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:58.029991    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:58.033352    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:58.033352    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:58.033352    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:58.033352    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:58.033352    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:58 GMT
	I0514 00:17:58.033352    4316 round_trippers.go:580]     Audit-Id: 6ed9c178-66b7-416e-985e-f90b677c332c
	I0514 00:17:58.033352    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:58.033352    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:58.034255    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:58.034756    4316 pod_ready.go:102] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"False"
	I0514 00:17:58.523017    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:58.523017    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:58.523017    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:58.523017    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:58.526971    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:58.527044    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:58.527044    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:58.527044    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:58.527044    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:58.527044    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:58 GMT
	I0514 00:17:58.527129    4316 round_trippers.go:580]     Audit-Id: 10da8c0f-ad6d-4f65-8225-29ba4b0231a6
	I0514 00:17:58.527129    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:58.527407    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:58.528515    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:58.528593    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:58.528593    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:58.528593    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:58.531572    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:58.531572    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:58.531572    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:58.531572    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:58.531572    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:58 GMT
	I0514 00:17:58.531572    4316 round_trippers.go:580]     Audit-Id: b92384a7-0912-45d4-99ba-1addbaaf30c3
	I0514 00:17:58.531572    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:58.531572    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:58.532103    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:59.020580    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:59.020884    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:59.020884    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:59.020884    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:59.024921    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:17:59.025614    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:59.025614    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:59.025726    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:59.025726    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:59.025726    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:59 GMT
	I0514 00:17:59.025726    4316 round_trippers.go:580]     Audit-Id: a2fb6152-c040-4879-b53f-08f2bcfbc50a
	I0514 00:17:59.025726    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:59.025850    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:59.026944    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:59.027020    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:59.027082    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:59.027082    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:59.029904    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:17:59.029904    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:59.029904    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:59.029904    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:59.029904    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:59 GMT
	I0514 00:17:59.029904    4316 round_trippers.go:580]     Audit-Id: be1b58a7-12e4-48de-a7c8-744732e6b6db
	I0514 00:17:59.029904    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:59.029904    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:59.029904    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:17:59.519408    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:17:59.519649    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:59.519649    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:59.519649    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:59.523214    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:17:59.524188    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:59.524188    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:59.524188    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:59.524188    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:59.524188    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:59 GMT
	I0514 00:17:59.524188    4316 round_trippers.go:580]     Audit-Id: 85d171de-4c07-4851-b217-65dbffd5c873
	I0514 00:17:59.524290    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:59.524366    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:17:59.525028    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:17:59.525028    4316 round_trippers.go:469] Request Headers:
	I0514 00:17:59.525028    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:17:59.525551    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:17:59.531915    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:17:59.531915    4316 round_trippers.go:577] Response Headers:
	I0514 00:17:59.531915    4316 round_trippers.go:580]     Audit-Id: ff26dd51-776d-42ba-9ace-873308d21e37
	I0514 00:17:59.531915    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:17:59.531915    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:17:59.531915    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:17:59.531915    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:17:59.531915    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:17:59 GMT
	I0514 00:17:59.532507    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:18:00.025115    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:18:00.025115    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:00.025115    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:00.025173    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:00.028377    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:18:00.028377    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:00.028377    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:00.028377    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:00.028947    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:00.028947    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:00.028947    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:00 GMT
	I0514 00:18:00.028947    4316 round_trippers.go:580]     Audit-Id: 5659c070-8a0e-4100-be51-3155801ecefc
	I0514 00:18:00.029104    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:18:00.029878    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:18:00.029878    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:00.029967    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:00.029967    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:00.058381    4316 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0514 00:18:00.059359    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:00.059401    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:00.059401    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:00.059401    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:00.059401    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:00.059401    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:00 GMT
	I0514 00:18:00.059401    4316 round_trippers.go:580]     Audit-Id: 46df3cf4-5a07-4dca-abe8-bf1c00a5409b
	I0514 00:18:00.059901    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:18:00.059901    4316 pod_ready.go:102] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"False"
	I0514 00:18:00.524592    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:18:00.524592    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:00.524592    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:00.524592    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:00.528956    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:18:00.528956    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:00.528956    4316 round_trippers.go:580]     Audit-Id: 6b0d422d-1c7f-4d13-afa0-0bb07da07442
	I0514 00:18:00.528956    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:00.528956    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:00.528956    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:00.528956    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:00.528956    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:00 GMT
	I0514 00:18:00.530013    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:18:00.531146    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:18:00.531146    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:00.531225    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:00.531225    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:00.538426    4316 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0514 00:18:00.539198    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:00.539198    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:00.539198    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:00.539198    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:00 GMT
	I0514 00:18:00.539198    4316 round_trippers.go:580]     Audit-Id: 08fde253-009a-4fa4-a7b3-70c265d850f1
	I0514 00:18:00.539198    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:00.539262    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:00.539439    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:18:01.033970    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:18:01.034341    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.034341    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.034341    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.037696    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:18:01.037696    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.037696    4316 round_trippers.go:580]     Audit-Id: e4350530-c24e-416d-b671-826d40a28a66
	I0514 00:18:01.038194    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.038194    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.038194    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.038194    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.038194    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.038409    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1715","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0514 00:18:01.039048    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:18:01.039048    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.039048    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.039048    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.043288    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:18:01.043975    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.043975    4316 round_trippers.go:580]     Audit-Id: cd43dbcf-4193-4f7c-8595-35b44eacd72b
	I0514 00:18:01.043975    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.044096    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.044096    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.044096    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.044096    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.044096    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:18:01.531936    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:18:01.531936    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.531936    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.531936    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.534663    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:18:01.535611    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.535611    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.535611    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.535611    4316 round_trippers.go:580]     Audit-Id: 4d7e7edb-687a-4196-a485-9b840fe63b11
	I0514 00:18:01.535611    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.535611    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.535611    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.535819    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1851","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0514 00:18:01.536407    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:18:01.536407    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.536407    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.536532    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.539719    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:18:01.539719    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.539719    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.539719    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.539820    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.539820    4316 round_trippers.go:580]     Audit-Id: 9bc018a7-bb5b-45af-9a9d-23eab50fdf69
	I0514 00:18:01.539820    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.539820    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.540050    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:18:01.540439    4316 pod_ready.go:92] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"True"
	I0514 00:18:01.540501    4316 pod_ready.go:81] duration metric: took 25.5219074s for pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:01.540501    4316 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:01.540617    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-101100
	I0514 00:18:01.540617    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.540617    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.540617    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.543931    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:18:01.543931    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.543931    4316 round_trippers.go:580]     Audit-Id: a4d5238b-9208-4c3d-99ab-6ec97ec1b248
	I0514 00:18:01.543931    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.543931    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.543931    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.543931    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.543931    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.543931    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-101100","namespace":"kube-system","uid":"74cd34fe-a56b-453d-afb3-a9db3db0d5ba","resourceVersion":"1779","creationTimestamp":"2024-05-14T00:16:55Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.102.122:2379","kubernetes.io/config.hash":"62d8afc7714e8ab65bff9675d120bb67","kubernetes.io/config.mirror":"62d8afc7714e8ab65bff9675d120bb67","kubernetes.io/config.seen":"2024-05-14T00:16:49.843121737Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:16:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0514 00:18:01.543931    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:18:01.543931    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.543931    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.543931    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.547176    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:18:01.547176    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.547176    4316 round_trippers.go:580]     Audit-Id: 0bd1c733-6404-4efb-9feb-211c75cce9c6
	I0514 00:18:01.547176    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.547176    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.547176    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.547176    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.547176    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.547733    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:18:01.548182    4316 pod_ready.go:92] pod "etcd-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0514 00:18:01.548239    4316 pod_ready.go:81] duration metric: took 7.7376ms for pod "etcd-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:01.548239    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:01.548297    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-101100
	I0514 00:18:01.548377    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.548377    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.548377    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.550708    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:18:01.550708    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.550708    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.550708    4316 round_trippers.go:580]     Audit-Id: e6549549-8b0a-465f-ae81-e4500ff8c23b
	I0514 00:18:01.550708    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.550708    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.550708    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.550708    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.551708    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-101100","namespace":"kube-system","uid":"60889645-4c2d-4cfc-b322-c0f1b6e34503","resourceVersion":"1775","creationTimestamp":"2024-05-14T00:16:55Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.102.122:8443","kubernetes.io/config.hash":"378d61cf78af695f1df41e321907a84d","kubernetes.io/config.mirror":"378d61cf78af695f1df41e321907a84d","kubernetes.io/config.seen":"2024-05-14T00:16:49.778409853Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:16:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0514 00:18:01.551708    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:18:01.551708    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.551708    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.551708    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.554577    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:18:01.554935    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.554935    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.554935    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.554935    4316 round_trippers.go:580]     Audit-Id: 25f56a5f-ef6a-4957-a46d-45444bacea79
	I0514 00:18:01.554935    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.554935    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.554935    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.555140    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:18:01.555496    4316 pod_ready.go:92] pod "kube-apiserver-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0514 00:18:01.555496    4316 pod_ready.go:81] duration metric: took 7.1994ms for pod "kube-apiserver-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:01.555496    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:01.555621    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-101100
	I0514 00:18:01.555621    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.555621    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.555621    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.557990    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:18:01.557990    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.557990    4316 round_trippers.go:580]     Audit-Id: fe1fefce-483d-44cd-b309-d34878a37069
	I0514 00:18:01.557990    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.557990    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.557990    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.557990    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.557990    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.558434    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-101100","namespace":"kube-system","uid":"1a74381a-7477-4fd3-b344-c4a230014f97","resourceVersion":"1752","creationTimestamp":"2024-05-13T23:56:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5393de2704b2efef461d22fa52aa93c8","kubernetes.io/config.mirror":"5393de2704b2efef461d22fa52aa93c8","kubernetes.io/config.seen":"2024-05-13T23:56:09.392106640Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0514 00:18:01.559028    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:18:01.559094    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.559094    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.559094    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.560992    4316 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0514 00:18:01.560992    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.560992    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.561620    4316 round_trippers.go:580]     Audit-Id: 6307f424-36a6-466e-9567-4fe96b8d38f6
	I0514 00:18:01.561620    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.561620    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.561620    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.561620    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.561832    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:18:01.561832    4316 pod_ready.go:92] pod "kube-controller-manager-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0514 00:18:01.561832    4316 pod_ready.go:81] duration metric: took 6.2693ms for pod "kube-controller-manager-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:01.561832    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8zsgn" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:01.561832    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8zsgn
	I0514 00:18:01.561832    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.561832    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.561832    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.564515    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:18:01.565329    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.565329    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.565329    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.565329    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.565329    4316 round_trippers.go:580]     Audit-Id: e4d422cf-3312-4572-95b7-3cd989d5170b
	I0514 00:18:01.565329    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.565329    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.565644    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8zsgn","generateName":"kube-proxy-","namespace":"kube-system","uid":"af208cbd-fa8a-4822-9b19-dc30f63fa59c","resourceVersion":"1621","creationTimestamp":"2024-05-14T00:03:17Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:03:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0514 00:18:01.566193    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:18:01.566193    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.566193    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.566193    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.569952    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:18:01.569952    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.569952    4316 round_trippers.go:580]     Audit-Id: e8babbe8-c2d4-4bdf-9dda-6009c6329cda
	I0514 00:18:01.569952    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.569952    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.569952    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.569952    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.569952    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.569952    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"fd2d4a0b-dc97-4959-b2ba-0f51719ad2b3","resourceVersion":"1836","creationTimestamp":"2024-05-14T00:12:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_12_45_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:12:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4400 chars]
	I0514 00:18:01.569952    4316 pod_ready.go:97] node "multinode-101100-m03" hosting pod "kube-proxy-8zsgn" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100-m03" has status "Ready":"Unknown"
	I0514 00:18:01.569952    4316 pod_ready.go:81] duration metric: took 8.12ms for pod "kube-proxy-8zsgn" in "kube-system" namespace to be "Ready" ...
	E0514 00:18:01.569952    4316 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-101100-m03" hosting pod "kube-proxy-8zsgn" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100-m03" has status "Ready":"Unknown"
	I0514 00:18:01.569952    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b25hq" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:01.735815    4316 request.go:629] Waited for 165.8525ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b25hq
	I0514 00:18:01.736053    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b25hq
	I0514 00:18:01.736053    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.736053    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.736053    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.740492    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:18:01.740492    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.740492    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.740492    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.740492    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.740492    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:01 GMT
	I0514 00:18:01.740492    4316 round_trippers.go:580]     Audit-Id: cee2f8af-4f02-4a05-85ab-785fc8dcfbd3
	I0514 00:18:01.740492    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.741005    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-b25hq","generateName":"kube-proxy-","namespace":"kube-system","uid":"d39f5818-3e88-4162-a7ce-734ca28103bf","resourceVersion":"1641","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0514 00:18:01.941794    4316 request.go:629] Waited for 199.7542ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:18:01.941970    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:18:01.941970    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:01.941970    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:01.941970    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:01.948189    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:18:01.949105    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:01.949105    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:02 GMT
	I0514 00:18:01.949105    4316 round_trippers.go:580]     Audit-Id: 1d549463-bcf5-4662-b83f-0fb779213b5e
	I0514 00:18:01.949105    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:01.949105    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:01.949105    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:01.949105    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:01.949105    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85","resourceVersion":"1842","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_13T23_59_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4582 chars]
	I0514 00:18:01.949955    4316 pod_ready.go:97] node "multinode-101100-m02" hosting pod "kube-proxy-b25hq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100-m02" has status "Ready":"Unknown"
	I0514 00:18:01.949955    4316 pod_ready.go:81] duration metric: took 379.9789ms for pod "kube-proxy-b25hq" in "kube-system" namespace to be "Ready" ...
	E0514 00:18:01.949955    4316 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-101100-m02" hosting pod "kube-proxy-b25hq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100-m02" has status "Ready":"Unknown"
	I0514 00:18:01.949955    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zhcz6" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:02.143393    4316 request.go:629] Waited for 193.3172ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zhcz6
	I0514 00:18:02.143763    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zhcz6
	I0514 00:18:02.143763    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:02.143763    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:02.143763    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:02.147971    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:18:02.148220    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:02.148220    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:02.148220    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:02.148220    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:02 GMT
	I0514 00:18:02.148220    4316 round_trippers.go:580]     Audit-Id: c14fdba1-417a-4f85-939c-db933bba548d
	I0514 00:18:02.148220    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:02.148220    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:02.148360    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zhcz6","generateName":"kube-proxy-","namespace":"kube-system","uid":"a9a488af-41ba-47f3-87b0-5a2f062afad6","resourceVersion":"1732","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0514 00:18:02.332179    4316 request.go:629] Waited for 183.029ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:18:02.332355    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:18:02.332355    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:02.332355    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:02.332457    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:02.338298    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:18:02.338298    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:02.338298    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:02.338298    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:02 GMT
	I0514 00:18:02.338298    4316 round_trippers.go:580]     Audit-Id: 4b8848c3-b000-4713-88f5-f88264a7ce60
	I0514 00:18:02.338298    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:02.338298    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:02.338298    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:02.338298    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:18:02.339570    4316 pod_ready.go:92] pod "kube-proxy-zhcz6" in "kube-system" namespace has status "Ready":"True"
	I0514 00:18:02.339602    4316 pod_ready.go:81] duration metric: took 389.6226ms for pod "kube-proxy-zhcz6" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:02.339602    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:02.533116    4316 request.go:629] Waited for 193.3558ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-101100
	I0514 00:18:02.533580    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-101100
	I0514 00:18:02.533674    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:02.533674    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:02.533674    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:02.536976    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:18:02.536976    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:02.536976    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:02.536976    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:02.536976    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:02.536976    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:02.537231    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:02 GMT
	I0514 00:18:02.537231    4316 round_trippers.go:580]     Audit-Id: 90628ba1-abda-4268-9296-71c2992d3d08
	I0514 00:18:02.537492    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-101100","namespace":"kube-system","uid":"d7300c2d-377f-4061-bd34-5f7593b7e827","resourceVersion":"1756","creationTimestamp":"2024-05-13T23:56:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8083abd658221f47cabf81a00c4ca98e","kubernetes.io/config.mirror":"8083abd658221f47cabf81a00c4ca98e","kubernetes.io/config.seen":"2024-05-13T23:56:09.392108241Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0514 00:18:02.733903    4316 request.go:629] Waited for 195.5807ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:18:02.733903    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:18:02.733903    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:02.733903    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:02.733903    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:02.737519    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:18:02.737519    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:02.737519    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:02.737519    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:02.737519    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:02.737519    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:02 GMT
	I0514 00:18:02.737519    4316 round_trippers.go:580]     Audit-Id: 61cf8449-5c39-452f-9021-4fb1e40d8ce9
	I0514 00:18:02.737519    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:02.738146    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:18:02.738146    4316 pod_ready.go:92] pod "kube-scheduler-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0514 00:18:02.738146    4316 pod_ready.go:81] duration metric: took 398.5183ms for pod "kube-scheduler-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:18:02.738146    4316 pod_ready.go:38] duration metric: took 26.7304415s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 00:18:02.738146    4316 api_server.go:52] waiting for apiserver process to appear ...
	I0514 00:18:02.745047    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0514 00:18:02.763952    4316 command_runner.go:130] > da9e6534cd87
	I0514 00:18:02.763952    4316 logs.go:276] 1 containers: [da9e6534cd87]
	I0514 00:18:02.770566    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0514 00:18:02.786853    4316 command_runner.go:130] > 08450c853590
	I0514 00:18:02.788436    4316 logs.go:276] 1 containers: [08450c853590]
	I0514 00:18:02.794094    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0514 00:18:02.812312    4316 command_runner.go:130] > dcc5a109288b
	I0514 00:18:02.812566    4316 command_runner.go:130] > 76c5ab7859ef
	I0514 00:18:02.813606    4316 logs.go:276] 2 containers: [dcc5a109288b 76c5ab7859ef]
	I0514 00:18:02.819195    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0514 00:18:02.840176    4316 command_runner.go:130] > d3581c1c570c
	I0514 00:18:02.840855    4316 command_runner.go:130] > 964887fc5d36
	I0514 00:18:02.841379    4316 logs.go:276] 2 containers: [d3581c1c570c 964887fc5d36]
	I0514 00:18:02.847426    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0514 00:18:02.865799    4316 command_runner.go:130] > b2a1b31cd7de
	I0514 00:18:02.865799    4316 command_runner.go:130] > 91edaaa00da2
	I0514 00:18:02.865799    4316 logs.go:276] 2 containers: [b2a1b31cd7de 91edaaa00da2]
	I0514 00:18:02.871795    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0514 00:18:02.895811    4316 command_runner.go:130] > b87239d1199a
	I0514 00:18:02.895811    4316 command_runner.go:130] > e96f94398d6d
	I0514 00:18:02.895811    4316 logs.go:276] 2 containers: [b87239d1199a e96f94398d6d]
	I0514 00:18:02.902449    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0514 00:18:02.921399    4316 command_runner.go:130] > 2b424a7cd98c
	I0514 00:18:02.921399    4316 command_runner.go:130] > b7d8d9a5e5ea
	I0514 00:18:02.922943    4316 logs.go:276] 2 containers: [2b424a7cd98c b7d8d9a5e5ea]
	I0514 00:18:02.923035    4316 logs.go:123] Gathering logs for container status ...
	I0514 00:18:02.923035    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0514 00:18:02.985754    4316 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0514 00:18:02.985876    4316 command_runner.go:130] > 3d0b2f0362eb4       8c811b4aec35f                                                                                         3 seconds ago        Running             busybox                   1                   8cb9b6d6d0915       busybox-fc5497c4f-xqj6w
	I0514 00:18:02.985908    4316 command_runner.go:130] > dcc5a109288b6       cbb01a7bd410d                                                                                         3 seconds ago        Running             coredns                   1                   1cccb5e8cee3b       coredns-7db6d8ff4d-4kmx4
	I0514 00:18:02.985908    4316 command_runner.go:130] > bde84ba2d4ed7       6e38f40d628db                                                                                         24 seconds ago       Running             storage-provisioner       2                   468a0e2976ae4       storage-provisioner
	I0514 00:18:02.985969    4316 command_runner.go:130] > 2b424a7cd98c8       4950bb10b3f87                                                                                         36 seconds ago       Running             kindnet-cni               2                   5233e076edceb       kindnet-9q2tv
	I0514 00:18:02.985999    4316 command_runner.go:130] > b7d8d9a5e5eaf       4950bb10b3f87                                                                                         About a minute ago   Exited              kindnet-cni               1                   5233e076edceb       kindnet-9q2tv
	I0514 00:18:02.986049    4316 command_runner.go:130] > b142687b621f1       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   468a0e2976ae4       storage-provisioner
	I0514 00:18:02.986082    4316 command_runner.go:130] > b2a1b31cd7dee       a0bf559e280cf                                                                                         About a minute ago   Running             kube-proxy                1                   a8ac60a565998       kube-proxy-zhcz6
	I0514 00:18:02.986082    4316 command_runner.go:130] > 08450c853590d       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   419648c0d4053       etcd-multinode-101100
	I0514 00:18:02.986182    4316 command_runner.go:130] > da9e6534cd87d       c42f13656d0b2                                                                                         About a minute ago   Running             kube-apiserver            0                   509b8407e0955       kube-apiserver-multinode-101100
	I0514 00:18:02.986182    4316 command_runner.go:130] > d3581c1c570cf       259c8277fcbbc                                                                                         About a minute ago   Running             kube-scheduler            1                   ddcaadef980ac       kube-scheduler-multinode-101100
	I0514 00:18:02.986219    4316 command_runner.go:130] > b87239d1199ab       c7aad43836fa5                                                                                         About a minute ago   Running             kube-controller-manager   1                   659643d47b9ae       kube-controller-manager-multinode-101100
	I0514 00:18:02.986259    4316 command_runner.go:130] > 57dea5416eb67       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago       Exited              busybox                   0                   76d1b8ce19aba       busybox-fc5497c4f-xqj6w
	I0514 00:18:02.986259    4316 command_runner.go:130] > 76c5ab7859eff       cbb01a7bd410d                                                                                         21 minutes ago       Exited              coredns                   0                   8bb49b28c842a       coredns-7db6d8ff4d-4kmx4
	I0514 00:18:02.986295    4316 command_runner.go:130] > 91edaaa00da23       a0bf559e280cf                                                                                         21 minutes ago       Exited              kube-proxy                0                   9bd694480978f       kube-proxy-zhcz6
	I0514 00:18:02.986335    4316 command_runner.go:130] > e96f94398d6dd       c7aad43836fa5                                                                                         22 minutes ago       Exited              kube-controller-manager   0                   da9268fd6556b       kube-controller-manager-multinode-101100
	I0514 00:18:02.986378    4316 command_runner.go:130] > 964887fc5d362       259c8277fcbbc                                                                                         22 minutes ago       Exited              kube-scheduler            0                   fcb3b27edcd2a       kube-scheduler-multinode-101100
	I0514 00:18:02.988807    4316 logs.go:123] Gathering logs for coredns [76c5ab7859ef] ...
	I0514 00:18:02.988879    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5ab7859ef"
	I0514 00:18:03.013508    4316 command_runner.go:130] > .:53
	I0514 00:18:03.013539    4316 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = aa3c53a4fee7c79042020c4ad5abc53f615c90ace85c56ddcef4febd643c83c914a53a500e1bfe4eab6dd4f6a22b9d2014a8ba875b505ed10d3063ed95ac2ed3
	I0514 00:18:03.013539    4316 command_runner.go:130] > CoreDNS-1.11.1
	I0514 00:18:03.013539    4316 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0514 00:18:03.013620    4316 command_runner.go:130] > [INFO] 127.0.0.1:57161 - 45698 "HINFO IN 8990392176501838712.5889638972791529478. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051692136s
	I0514 00:18:03.013620    4316 command_runner.go:130] > [INFO] 10.244.1.2:55099 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000211505s
	I0514 00:18:03.013620    4316 command_runner.go:130] > [INFO] 10.244.1.2:55878 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.185519855s
	I0514 00:18:03.013694    4316 command_runner.go:130] > [INFO] 10.244.1.2:33619 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.15684109s
	I0514 00:18:03.013694    4316 command_runner.go:130] > [INFO] 10.244.1.2:49440 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.197645067s
	I0514 00:18:03.013694    4316 command_runner.go:130] > [INFO] 10.244.0.3:50960 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000430608s
	I0514 00:18:03.013694    4316 command_runner.go:130] > [INFO] 10.244.0.3:46839 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000167103s
	I0514 00:18:03.013694    4316 command_runner.go:130] > [INFO] 10.244.0.3:55330 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000155803s
	I0514 00:18:03.013776    4316 command_runner.go:130] > [INFO] 10.244.0.3:50874 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000131802s
	I0514 00:18:03.013776    4316 command_runner.go:130] > [INFO] 10.244.1.2:53724 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096802s
	I0514 00:18:03.013847    4316 command_runner.go:130] > [INFO] 10.244.1.2:59752 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.042707366s
	I0514 00:18:03.013847    4316 command_runner.go:130] > [INFO] 10.244.1.2:54429 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000269706s
	I0514 00:18:03.013847    4316 command_runner.go:130] > [INFO] 10.244.1.2:48558 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000262605s
	I0514 00:18:03.013847    4316 command_runner.go:130] > [INFO] 10.244.1.2:46986 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023487677s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.1.2:60460 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174903s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.1.2:60672 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000204304s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.1.2:36311 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110402s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.0.3:43910 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000301006s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.0.3:52495 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000145803s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.0.3:46357 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066702s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.0.3:41390 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062301s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.0.3:35739 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000084301s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.0.3:44800 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000163303s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.0.3:57631 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068702s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.0.3:50842 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135702s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.1.2:41210 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204604s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.1.2:57858 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073801s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.1.2:48782 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000152303s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.1.2:36081 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121002s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.0.3:46909 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115002s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.0.3:36030 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000220205s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.0.3:56187 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059401s
	I0514 00:18:03.013955    4316 command_runner.go:130] > [INFO] 10.244.0.3:51500 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099802s
	I0514 00:18:03.014495    4316 command_runner.go:130] > [INFO] 10.244.1.2:57247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147903s
	I0514 00:18:03.014495    4316 command_runner.go:130] > [INFO] 10.244.1.2:46132 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000170203s
	I0514 00:18:03.014552    4316 command_runner.go:130] > [INFO] 10.244.1.2:57206 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000452309s
	I0514 00:18:03.014552    4316 command_runner.go:130] > [INFO] 10.244.1.2:44795 - 5 "PTR IN 1.96.23.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000146203s
	I0514 00:18:03.014588    4316 command_runner.go:130] > [INFO] 10.244.0.3:33385 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000082102s
	I0514 00:18:03.014649    4316 command_runner.go:130] > [INFO] 10.244.0.3:56742 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000173704s
	I0514 00:18:03.014649    4316 command_runner.go:130] > [INFO] 10.244.0.3:46927 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185904s
	I0514 00:18:03.014716    4316 command_runner.go:130] > [INFO] 10.244.0.3:42956 - 5 "PTR IN 1.96.23.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000054801s
	I0514 00:18:03.014758    4316 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0514 00:18:03.014758    4316 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0514 00:18:03.018567    4316 logs.go:123] Gathering logs for kube-scheduler [d3581c1c570c] ...
	I0514 00:18:03.018567    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3581c1c570c"
	I0514 00:18:03.040884    4316 command_runner.go:130] ! I0514 00:16:52.716401       1 serving.go:380] Generated self-signed cert in-memory
	I0514 00:18:03.040884    4316 command_runner.go:130] ! W0514 00:16:54.858727       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0514 00:18:03.040884    4316 command_runner.go:130] ! W0514 00:16:54.858778       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:03.040884    4316 command_runner.go:130] ! W0514 00:16:54.858790       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0514 00:18:03.040884    4316 command_runner.go:130] ! W0514 00:16:54.858800       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0514 00:18:03.040884    4316 command_runner.go:130] ! I0514 00:16:54.945438       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0514 00:18:03.040884    4316 command_runner.go:130] ! I0514 00:16:54.945867       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:03.041447    4316 command_runner.go:130] ! I0514 00:16:54.953986       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0514 00:18:03.041447    4316 command_runner.go:130] ! I0514 00:16:54.957180       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0514 00:18:03.041479    4316 command_runner.go:130] ! I0514 00:16:54.957284       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:18:03.041479    4316 command_runner.go:130] ! I0514 00:16:54.957493       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:03.041479    4316 command_runner.go:130] ! I0514 00:16:55.058381       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:18:03.043755    4316 logs.go:123] Gathering logs for kube-scheduler [964887fc5d36] ...
	I0514 00:18:03.043755    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964887fc5d36"
	I0514 00:18:03.068616    4316 command_runner.go:130] ! I0513 23:56:04.693680       1 serving.go:380] Generated self-signed cert in-memory
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.133341       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.133396       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.133407       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.133415       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0514 00:18:03.068616    4316 command_runner.go:130] ! I0513 23:56:06.170291       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0514 00:18:03.068616    4316 command_runner.go:130] ! I0513 23:56:06.170533       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:03.068616    4316 command_runner.go:130] ! I0513 23:56:06.174536       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0514 00:18:03.068616    4316 command_runner.go:130] ! I0513 23:56:06.174684       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0514 00:18:03.068616    4316 command_runner.go:130] ! I0513 23:56:06.174703       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:03.068616    4316 command_runner.go:130] ! I0513 23:56:06.174918       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.182722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.186053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.183583       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.183698       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.183781       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.183835       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.183868       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.184039       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.186929       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.186969       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.187026       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.188647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.188112       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.188121       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.188233       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.188242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.189252       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.189533       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.189643       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.189773       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.190106       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.190324       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.190538       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.191036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.191581       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! E0513 23:56:06.192160       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0514 00:18:03.068616    4316 command_runner.go:130] ! W0513 23:56:06.191626       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.069820    4316 command_runner.go:130] ! E0513 23:56:06.192721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.069820    4316 command_runner.go:130] ! W0513 23:56:06.190821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0514 00:18:03.069865    4316 command_runner.go:130] ! E0513 23:56:06.193134       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0514 00:18:03.069865    4316 command_runner.go:130] ! W0513 23:56:07.154218       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0514 00:18:03.069926    4316 command_runner.go:130] ! E0513 23:56:07.155376       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0514 00:18:03.069964    4316 command_runner.go:130] ! W0513 23:56:07.229548       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0514 00:18:03.069964    4316 command_runner.go:130] ! E0513 23:56:07.229613       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0514 00:18:03.070012    4316 command_runner.go:130] ! W0513 23:56:07.344429       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.070049    4316 command_runner.go:130] ! E0513 23:56:07.344853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.070049    4316 command_runner.go:130] ! W0513 23:56:07.410556       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0514 00:18:03.070049    4316 command_runner.go:130] ! E0513 23:56:07.410716       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0514 00:18:03.070102    4316 command_runner.go:130] ! W0513 23:56:07.423084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0514 00:18:03.070136    4316 command_runner.go:130] ! E0513 23:56:07.423126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0514 00:18:03.070193    4316 command_runner.go:130] ! W0513 23:56:07.467897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0514 00:18:03.070243    4316 command_runner.go:130] ! E0513 23:56:07.467939       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0514 00:18:03.070277    4316 command_runner.go:130] ! W0513 23:56:07.484903       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0514 00:18:03.070315    4316 command_runner.go:130] ! E0513 23:56:07.485019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0514 00:18:03.070315    4316 command_runner.go:130] ! W0513 23:56:07.545758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0514 00:18:03.070379    4316 command_runner.go:130] ! E0513 23:56:07.546087       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0514 00:18:03.070405    4316 command_runner.go:130] ! W0513 23:56:07.573884       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.070405    4316 command_runner.go:130] ! E0513 23:56:07.573980       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.070405    4316 command_runner.go:130] ! W0513 23:56:07.633780       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.070405    4316 command_runner.go:130] ! E0513 23:56:07.633901       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:03.070405    4316 command_runner.go:130] ! W0513 23:56:07.680821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0514 00:18:03.070405    4316 command_runner.go:130] ! E0513 23:56:07.680938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0514 00:18:03.070405    4316 command_runner.go:130] ! W0513 23:56:07.704130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0514 00:18:03.070405    4316 command_runner.go:130] ! E0513 23:56:07.704357       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0514 00:18:03.070405    4316 command_runner.go:130] ! W0513 23:56:07.736914       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:03.070405    4316 command_runner.go:130] ! E0513 23:56:07.737079       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:03.070405    4316 command_runner.go:130] ! W0513 23:56:07.754367       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0514 00:18:03.070405    4316 command_runner.go:130] ! E0513 23:56:07.754798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0514 00:18:03.070405    4316 command_runner.go:130] ! I0513 23:56:09.676327       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:18:03.070405    4316 command_runner.go:130] ! E0514 00:14:35.689344       1 run.go:74] "command failed" err="finished without leader elect"
	I0514 00:18:03.079025    4316 logs.go:123] Gathering logs for kindnet [2b424a7cd98c] ...
	I0514 00:18:03.079025    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b424a7cd98c"
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:28.349800       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:28.349935       1 main.go:107] hostIP = 172.23.102.122
	I0514 00:18:03.104051    4316 command_runner.go:130] ! podIP = 172.23.102.122
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:28.441282       1 main.go:116] setting mtu 1500 for CNI 
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:28.441413       1 main.go:146] kindnetd IP family: "ipv4"
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:28.441441       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:29.045047       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:29.045110       1 main.go:227] handling current node
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:29.045545       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:29.045580       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:29.045839       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.23.109.58 Flags: [] Table: 0} 
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:29.045983       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:29.045993       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:29.046039       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.23.102.231 Flags: [] Table: 0} 
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:39.055904       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:39.056127       1 main.go:227] handling current node
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:39.056141       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:39.056155       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:39.056412       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:39.056502       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:49.062369       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:49.062453       1 main.go:227] handling current node
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:49.062465       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:49.062483       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:49.062816       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:49.062843       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:59.075229       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:59.075506       1 main.go:227] handling current node
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:59.075588       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:59.075650       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:59.075827       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:03.104051    4316 command_runner.go:130] ! I0514 00:17:59.075835       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:03.106777    4316 logs.go:123] Gathering logs for Docker ...
	I0514 00:18:03.106854    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0514 00:18:03.138012    4316 command_runner.go:130] > May 14 00:15:30 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0514 00:18:03.138075    4316 command_runner.go:130] > May 14 00:15:30 minikube cri-dockerd[223]: time="2024-05-14T00:15:30Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0514 00:18:03.138075    4316 command_runner.go:130] > May 14 00:15:30 minikube cri-dockerd[223]: time="2024-05-14T00:15:30Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0514 00:18:03.138118    4316 command_runner.go:130] > May 14 00:15:30 minikube cri-dockerd[223]: time="2024-05-14T00:15:30Z" level=info msg="Start docker client with request timeout 0s"
	I0514 00:18:03.138118    4316 command_runner.go:130] > May 14 00:15:30 minikube cri-dockerd[223]: time="2024-05-14T00:15:30Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0514 00:18:03.138118    4316 command_runner.go:130] > May 14 00:15:31 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:03.138118    4316 command_runner.go:130] > May 14 00:15:31 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0514 00:18:03.138224    4316 command_runner.go:130] > May 14 00:15:31 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0514 00:18:03.138262    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0514 00:18:03.138262    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0514 00:18:03.138318    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0514 00:18:03.138318    4316 command_runner.go:130] > May 14 00:15:33 minikube cri-dockerd[418]: time="2024-05-14T00:15:33Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0514 00:18:03.138387    4316 command_runner.go:130] > May 14 00:15:33 minikube cri-dockerd[418]: time="2024-05-14T00:15:33Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:33 minikube cri-dockerd[418]: time="2024-05-14T00:15:33Z" level=info msg="Start docker client with request timeout 0s"
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:33 minikube cri-dockerd[418]: time="2024-05-14T00:15:33Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:36 minikube cri-dockerd[426]: time="2024-05-14T00:15:36Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:36 minikube cri-dockerd[426]: time="2024-05-14T00:15:36Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:36 minikube cri-dockerd[426]: time="2024-05-14T00:15:36Z" level=info msg="Start docker client with request timeout 0s"
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:36 minikube cri-dockerd[426]: time="2024-05-14T00:15:36Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 systemd[1]: Starting Docker Application Container Engine...
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[654]: time="2024-05-14T00:16:17.349024460Z" level=info msg="Starting up"
	I0514 00:18:03.138414    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[654]: time="2024-05-14T00:16:17.349886331Z" level=info msg="containerd not running, starting managed containerd"
	I0514 00:18:03.138940    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[654]: time="2024-05-14T00:16:17.351031392Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=660
	I0514 00:18:03.138980    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.380428255Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0514 00:18:03.139038    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.407060046Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0514 00:18:03.139038    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.407104860Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0514 00:18:03.139076    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.407157277Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0514 00:18:03.139162    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.407182685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.139208    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408093872Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:03.139246    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408200005Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.139290    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408421875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:03.139327    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408522107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.139373    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408552116Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0514 00:18:03.139411    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408565820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.139455    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.409126597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.139493    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.409855027Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.139574    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.412841968Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:03.139617    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.412982412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.139654    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.413109352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:03.139701    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.413195779Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0514 00:18:03.139738    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.414192994Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0514 00:18:03.139782    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.414303628Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0514 00:18:03.139819    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.414321234Z" level=info msg="metadata content store policy set" policy=shared
	I0514 00:18:03.139864    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420644226Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0514 00:18:03.139902    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420793973Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0514 00:18:03.139902    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420815380Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0514 00:18:03.139947    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420835086Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0514 00:18:03.139979    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420849391Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0514 00:18:03.140017    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421006640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0514 00:18:03.140048    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421303834Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0514 00:18:03.140048    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421395163Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0514 00:18:03.140086    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421479890Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0514 00:18:03.140120    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421494994Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0514 00:18:03.140190    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421507198Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.140227    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421523703Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.140273    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421540509Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.140313    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421554613Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.140359    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421571518Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.140359    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421584022Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.140396    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421594526Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.140479    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421604629Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.140527    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421626336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140565    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421639040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140609    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421651344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140609    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421662947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140646    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421673350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140729    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421684554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140729    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421695257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140813    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421705961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140813    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421717564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140867    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421730268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140906    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421774782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140906    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421787286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140944    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421797990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.140944    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421811094Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0514 00:18:03.140983    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421828299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.141022    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421838703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.141022    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421849206Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0514 00:18:03.141060    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421898721Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0514 00:18:03.141093    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421926330Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0514 00:18:03.141132    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421987549Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0514 00:18:03.141171    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422004755Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0514 00:18:03.141208    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422070276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.141208    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422106987Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0514 00:18:03.141247    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422118891Z" level=info msg="NRI interface is disabled by configuration."
	I0514 00:18:03.141247    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422453196Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0514 00:18:03.141284    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422571233Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0514 00:18:03.141284    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422619148Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0514 00:18:03.141318    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422687970Z" level=info msg="containerd successfully booted in 0.044863s"
	I0514 00:18:03.141318    4316 command_runner.go:130] > May 14 00:16:18 multinode-101100 dockerd[654]: time="2024-05-14T00:16:18.404653025Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0514 00:18:03.141354    4316 command_runner.go:130] > May 14 00:16:18 multinode-101100 dockerd[654]: time="2024-05-14T00:16:18.578701970Z" level=info msg="Loading containers: start."
	I0514 00:18:03.141387    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.027152626Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0514 00:18:03.141387    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.105905244Z" level=info msg="Loading containers: done."
	I0514 00:18:03.141424    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.135340666Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0514 00:18:03.141457    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.136139953Z" level=info msg="Daemon has completed initialization"
	I0514 00:18:03.141457    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.185948604Z" level=info msg="API listen on [::]:2376"
	I0514 00:18:03.141494    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.186071317Z" level=info msg="API listen on /var/run/docker.sock"
	I0514 00:18:03.141494    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 systemd[1]: Started Docker Application Container Engine.
	I0514 00:18:03.141527    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 systemd[1]: Stopping Docker Application Container Engine...
	I0514 00:18:03.141527    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.988898314Z" level=info msg="Processing signal 'terminated'"
	I0514 00:18:03.141564    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.989838579Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0514 00:18:03.141564    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.990583130Z" level=info msg="Daemon shutdown complete"
	I0514 00:18:03.141602    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.990661536Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0514 00:18:03.141640    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.990696238Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0514 00:18:03.141640    4316 command_runner.go:130] > May 14 00:16:42 multinode-101100 systemd[1]: docker.service: Deactivated successfully.
	I0514 00:18:03.141678    4316 command_runner.go:130] > May 14 00:16:42 multinode-101100 systemd[1]: Stopped Docker Application Container Engine.
	I0514 00:18:03.141678    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 systemd[1]: Starting Docker Application Container Engine...
	I0514 00:18:03.141678    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:43.059729298Z" level=info msg="Starting up"
	I0514 00:18:03.141716    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:43.060541955Z" level=info msg="containerd not running, starting managed containerd"
	I0514 00:18:03.141749    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:43.061850245Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1055
	I0514 00:18:03.141749    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.092613476Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0514 00:18:03.141786    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115368453Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0514 00:18:03.141818    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115403155Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0514 00:18:03.141818    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115435257Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0514 00:18:03.141855    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115450359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.141887    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115473760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:03.141924    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115486261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.141924    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115635771Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:03.141962    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115738478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.141999    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115756280Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0514 00:18:03.141999    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115766280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.142038    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115789882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.142074    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.116031099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.142107    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.119790059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:03.142144    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.119888566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:03.142144    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120181886Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:03.142176    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120287794Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0514 00:18:03.142213    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120385900Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0514 00:18:03.142246    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120406702Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0514 00:18:03.142282    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120419603Z" level=info msg="metadata content store policy set" policy=shared
	I0514 00:18:03.142317    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120713023Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0514 00:18:03.142354    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120746825Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0514 00:18:03.142354    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120760126Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0514 00:18:03.142386    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120773227Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0514 00:18:03.142423    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120785328Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0514 00:18:03.142423    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120826831Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0514 00:18:03.142456    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120999543Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0514 00:18:03.142493    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121054147Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0514 00:18:03.142493    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121092049Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0514 00:18:03.142531    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121102050Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0514 00:18:03.142568    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121115951Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.142568    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121126152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.142602    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121135052Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.142631    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121145153Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.142656    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121156354Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.142707    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121165854Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.142731    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121175255Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.142780    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121184656Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0514 00:18:03.142780    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121204657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142815    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121216358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142815    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121225759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142862    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121235159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121243960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121254361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121263161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121275762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121287763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121299564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121364668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121378369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121388070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121400871Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121421772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121432873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121442174Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121474076Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121485477Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121493977Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121504178Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121548581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121558382Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121570783Z" level=info msg="NRI interface is disabled by configuration."
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121732894Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121765696Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121795498Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121808099Z" level=info msg="containerd successfully booted in 0.031442s"
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.110784113Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0514 00:18:03.142886    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.142577516Z" level=info msg="Loading containers: start."
	I0514 00:18:03.143418    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.405628939Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0514 00:18:03.143418    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.480865351Z" level=info msg="Loading containers: done."
	I0514 00:18:03.143458    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.503621028Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0514 00:18:03.143493    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.503703734Z" level=info msg="Daemon has completed initialization"
	I0514 00:18:03.143493    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.545253312Z" level=info msg="API listen on /var/run/docker.sock"
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.545312016Z" level=info msg="API listen on [::]:2376"
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 systemd[1]: Started Docker Application Container Engine.
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Start docker client with request timeout 0s"
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Loaded network plugin cni"
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Start cri-dockerd grpc backend"
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-xqj6w_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"76d1b8ce19aba5b210540936b7a4b3d885cf4632a985872e3cf05d6cea2e0ca2\""
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-4kmx4_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"8bb49b28c842af421711ef939d018058baa07a32bbcdc98976511d4800986697\""
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.717439407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.717535614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.717551915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.718214261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.720663031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.720923549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.721017455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.721295774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.783128658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.783344773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.143524    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.783450280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144047    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.783657895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144085    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.816093342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.144085    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.816151946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.144120    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.816166547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.816251853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ddcaadef980aca40a7740fe7c59949c3cb803d9fb441eca155b02162f3422bb8/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/659643d47b9ae231a8b97d9871cab6dfac5f6d06e647c919d14170832ee47683/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/419648c0d4053fc49953367496f1dbfe0fc7ce631e09569d18f5031a7c94053b/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/509b8407e0955daa05e6418b83790728e61d0bd72fecdd814c8e92ae9e80d3a3/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.258935521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.259980593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.260187008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.260361520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.272553064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.272771779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.272798781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.272907589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.314782590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.314905098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.314946601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.315263523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.385829312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.386016625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.386135333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.386495758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:55Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0514 00:18:03.144152    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.444453862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.144676    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.444531867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.444549969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.444647976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.461909471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.462106685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.462142187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.462265196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.492511091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.492965923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.493135035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.493390352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a8ac60a565998ca52581e38272f2fcdb5f7038023f93d728cd74f5b89f5593ed/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/468a0e2976ae45a571a99afabfcd1329c76873e973179fe56cc9ef46e2533698/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.849392115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.849539826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.849623331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.849861048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.857219658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.857468675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.857687390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.858016113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5233e076edceb93931d756579982e556959dfd31508760da215a8407dca14e56/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:57.218178264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:57.218325574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:57.218348976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.144707    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:57.218459383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145229    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 dockerd[1049]: time="2024-05-14T00:17:17.430189771Z" level=info msg="ignoring event" container=b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0514 00:18:03.145229    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:17.431460316Z" level=info msg="shim disconnected" id=b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7 namespace=moby
	I0514 00:18:03.145229    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:17.431869631Z" level=warning msg="cleaning up after shim disconnected" id=b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7 namespace=moby
	I0514 00:18:03.145229    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:17.432007736Z" level=info msg="cleaning up dead shim" namespace=moby
	I0514 00:18:03.145407    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 dockerd[1049]: time="2024-05-14T00:17:27.281698284Z" level=info msg="ignoring event" container=b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0514 00:18:03.145455    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:27.282877145Z" level=info msg="shim disconnected" id=b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc namespace=moby
	I0514 00:18:03.145455    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:27.283000451Z" level=warning msg="cleaning up after shim disconnected" id=b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc namespace=moby
	I0514 00:18:03.145488    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:27.283015352Z" level=info msg="cleaning up dead shim" namespace=moby
	I0514 00:18:03.145519    4316 command_runner.go:130] > May 14 00:17:28 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:28.098999177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.145590    4316 command_runner.go:130] > May 14 00:17:28 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:28.099271791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.145590    4316 command_runner.go:130] > May 14 00:17:28 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:28.099326694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145632    4316 command_runner.go:130] > May 14 00:17:28 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:28.099641511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145662    4316 command_runner.go:130] > May 14 00:17:40 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:40.092603581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.145704    4316 command_runner.go:130] > May 14 00:17:40 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:40.093732951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.145704    4316 command_runner.go:130] > May 14 00:17:40 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:40.093768053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145734    4316 command_runner.go:130] > May 14 00:17:40 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:40.095427255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145807    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235051362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.145807    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235156269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.145848    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235169170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145879    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235258576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145920    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235645702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.145920    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235713507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.145951    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235730808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235828014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:18:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1cccb5e8cee3b173bd49a88aee4239ccc8bc11a3a166316e92f3a9abce9b252d/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:18:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8cb9b6d6d0915742a78c054211d49332a04beb4875f8a8f80cc4131b2a11aa2d/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.743900500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.743970305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.744406335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.745139484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.808545660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.808756974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.808962988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.809189903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:03.145992    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:03.174078    4316 logs.go:123] Gathering logs for kubelet ...
	I0514 00:18:03.174078    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0514 00:18:03.194098    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0514 00:18:03.194866    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 kubelet[1385]: I0514 00:16:46.507609    1385 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0514 00:18:03.194866    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 kubelet[1385]: I0514 00:16:46.507660    1385 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:03.194866    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 kubelet[1385]: I0514 00:16:46.508230    1385 server.go:927] "Client rotation is on, will bootstrap in background"
	I0514 00:18:03.194982    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 kubelet[1385]: E0514 00:16:46.508906    1385 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0514 00:18:03.194982    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 kubelet[1441]: I0514 00:16:47.229791    1441 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 kubelet[1441]: I0514 00:16:47.229941    1441 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 kubelet[1441]: I0514 00:16:47.230764    1441 server.go:927] "Client rotation is on, will bootstrap in background"
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 kubelet[1441]: E0514 00:16:47.231303    1441 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.717000    1520 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.717452    1520 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.717850    1520 server.go:927] "Client rotation is on, will bootstrap in background"
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.719747    1520 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.734764    1520 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.754342    1520 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0514 00:18:03.195044    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.754443    1520 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0514 00:18:03.195578    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.755707    1520 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0514 00:18:03.195680    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.755788    1520 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-101100","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0514 00:18:03.195860    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.756671    1520 topology_manager.go:138] "Creating topology manager with none policy"
	I0514 00:18:03.195927    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.756747    1520 container_manager_linux.go:301] "Creating device plugin manager"
	I0514 00:18:03.195964    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.757344    1520 state_mem.go:36] "Initialized new in-memory state store"
	I0514 00:18:03.195964    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.758885    1520 kubelet.go:400] "Attempting to sync node with API server"
	I0514 00:18:03.196026    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.759591    1520 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0514 00:18:03.196063    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.759727    1520 kubelet.go:312] "Adding apiserver pod source"
	I0514 00:18:03.196063    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.760630    1520 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0514 00:18:03.196164    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.765370    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-101100&limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.196224    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.765512    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-101100&limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.196322    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.767039    1520 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0514 00:18:03.196358    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.771297    1520 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0514 00:18:03.196419    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.771834    1520 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0514 00:18:03.196460    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.773545    1520 server.go:1264] "Started kubelet"
	I0514 00:18:03.196460    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.773829    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.196558    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.774013    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.196757    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.780360    1520 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.23.102.122:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-101100.17cf32c62bf0274b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-101100,UID:multinode-101100,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-101100,},FirstTimestamp:2024-05-14 00:16:49.773520715 +0000 UTC m=+0.124549330,LastTimestamp:2024-05-14 00:16:49.773520715 +0000 UTC m=+0.124549330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-101100,}"
	I0514 00:18:03.196844    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.781297    1520 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0514 00:18:03.196844    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.786484    1520 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0514 00:18:03.196844    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.787784    1520 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0514 00:18:03.196940    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.792005    1520 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0514 00:18:03.196940    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.800317    1520 server.go:455] "Adding debug handlers to kubelet server"
	I0514 00:18:03.196940    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.805202    1520 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0514 00:18:03.197042    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.805290    1520 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0514 00:18:03.197042    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.812186    1520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-101100?timeout=10s\": dial tcp 172.23.102.122:8443: connect: connection refused" interval="200ms"
	I0514 00:18:03.197176    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.812333    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.197239    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.812369    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.197281    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.816781    1520 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0514 00:18:03.197281    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.816881    1520 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0514 00:18:03.197374    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.816892    1520 factory.go:221] Registration of the systemd container factory successfully
	I0514 00:18:03.197374    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.849206    1520 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0514 00:18:03.197374    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.849426    1520 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0514 00:18:03.197479    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.849585    1520 state_mem.go:36] "Initialized new in-memory state store"
	I0514 00:18:03.197479    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.850764    1520 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0514 00:18:03.197689    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.850799    1520 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0514 00:18:03.197689    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.850826    1520 policy_none.go:49] "None policy: Start"
	I0514 00:18:03.197794    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.855604    1520 reconciler.go:26] "Reconciler: start to sync state"
	I0514 00:18:03.197794    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.884024    1520 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0514 00:18:03.197794    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.884165    1520 state_mem.go:35] "Initializing new in-memory state store"
	I0514 00:18:03.197888    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.886215    1520 state_mem.go:75] "Updated machine memory state"
	I0514 00:18:03.197888    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.888657    1520 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0514 00:18:03.197982    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.888839    1520 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0514 00:18:03.197982    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.891306    1520 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0514 00:18:03.198075    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.897961    1520 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0514 00:18:03.198075    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.898040    1520 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0514 00:18:03.198075    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.898088    1520 kubelet.go:2337] "Starting kubelet main sync loop"
	I0514 00:18:03.198168    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.898127    1520 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0514 00:18:03.198168    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.898551    1520 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0514 00:18:03.198261    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.899218    1520 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-101100\" not found"
	I0514 00:18:03.198261    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.900215    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.198357    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.900324    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.198504    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.907443    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:03.198583    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.909152    1520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.102.122:8443: connect: connection refused" node="multinode-101100"
	I0514 00:18:03.198639    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.912132    1520 iptables.go:577] "Could not set up iptables canary" err=<
	I0514 00:18:03.198678    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0514 00:18:03.198711    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0514 00:18:03.198711    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0514 00:18:03.198711    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0514 00:18:03.199339    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.999139    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f7c140951f4f8270da243f55135e9f108f3cdf5ef11a4e990e06822ace5adbd"
	I0514 00:18:03.199432    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.999762    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90d7537422a83c9a57ab3bed978e87441e2725a75ebc91f5cad3319d11d4ea18"
	I0514 00:18:03.199432    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.999846    1520 topology_manager.go:215] "Topology Admit Handler" podUID="378d61cf78af695f1df41e321907a84d" podNamespace="kube-system" podName="kube-apiserver-multinode-101100"
	I0514 00:18:03.199432    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.000880    1520 topology_manager.go:215] "Topology Admit Handler" podUID="5393de2704b2efef461d22fa52aa93c8" podNamespace="kube-system" podName="kube-controller-manager-multinode-101100"
	I0514 00:18:03.199432    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.002201    1520 topology_manager.go:215] "Topology Admit Handler" podUID="8083abd658221f47cabf81a00c4ca98e" podNamespace="kube-system" podName="kube-scheduler-multinode-101100"
	I0514 00:18:03.199432    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.004707    1520 topology_manager.go:215] "Topology Admit Handler" podUID="62d8afc7714e8ab65bff9675d120bb67" podNamespace="kube-system" podName="etcd-multinode-101100"
	I0514 00:18:03.199694    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.007687    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcb3b27edcd2a44b67fad4a74f438a62eec78b20422f6f952396053574dfb97e"
	I0514 00:18:03.199694    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.007796    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da9268fd6556bae4d0109c5065588160bcf737c35e1e5df738d31786425c22ff"
	I0514 00:18:03.199781    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.007891    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bd694480978f356b61313108a6ff716a8d5f6e854fea1e4aa89a76a68d049f0"
	I0514 00:18:03.199781    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.007938    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="287e744a4dc2e511f4e40696c7d3b4193896c0c40a5bb527e569d1d3ec2cb908"
	I0514 00:18:03.199781    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.013966    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad0550a5dabf16106fc2956251a65bccdc32f3f3be1f27246f675964fd548a1f"
	I0514 00:18:03.200083    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.014759    1520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-101100?timeout=10s\": dial tcp 172.23.102.122:8443: connect: connection refused" interval="400ms"
	I0514 00:18:03.200083    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.031437    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76d1b8ce19aba5b210540936b7a4b3d885cf4632a985872e3cf05d6cea2e0ca2"
	I0514 00:18:03.200083    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.048649    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bb49b28c842af421711ef939d018058baa07a32bbcdc98976511d4800986697"
	I0514 00:18:03.200083    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074775    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/378d61cf78af695f1df41e321907a84d-ca-certs\") pod \"kube-apiserver-multinode-101100\" (UID: \"378d61cf78af695f1df41e321907a84d\") " pod="kube-system/kube-apiserver-multinode-101100"
	I0514 00:18:03.200083    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074859    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/378d61cf78af695f1df41e321907a84d-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-101100\" (UID: \"378d61cf78af695f1df41e321907a84d\") " pod="kube-system/kube-apiserver-multinode-101100"
	I0514 00:18:03.200083    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074906    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-k8s-certs\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:03.200083    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074943    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:03.200083    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074981    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/62d8afc7714e8ab65bff9675d120bb67-etcd-certs\") pod \"etcd-multinode-101100\" (UID: \"62d8afc7714e8ab65bff9675d120bb67\") " pod="kube-system/etcd-multinode-101100"
	I0514 00:18:03.200614    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075015    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/62d8afc7714e8ab65bff9675d120bb67-etcd-data\") pod \"etcd-multinode-101100\" (UID: \"62d8afc7714e8ab65bff9675d120bb67\") " pod="kube-system/etcd-multinode-101100"
	I0514 00:18:03.200733    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075045    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/378d61cf78af695f1df41e321907a84d-k8s-certs\") pod \"kube-apiserver-multinode-101100\" (UID: \"378d61cf78af695f1df41e321907a84d\") " pod="kube-system/kube-apiserver-multinode-101100"
	I0514 00:18:03.200779    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075248    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-ca-certs\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:03.200907    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075285    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-flexvolume-dir\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:03.201054    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075316    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-kubeconfig\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:03.201117    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075345    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8083abd658221f47cabf81a00c4ca98e-kubeconfig\") pod \"kube-scheduler-multinode-101100\" (UID: \"8083abd658221f47cabf81a00c4ca98e\") " pod="kube-system/kube-scheduler-multinode-101100"
	I0514 00:18:03.201218    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.111262    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:03.201255    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.112979    1520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.102.122:8443: connect: connection refused" node="multinode-101100"
	I0514 00:18:03.201297    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.416229    1520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-101100?timeout=10s\": dial tcp 172.23.102.122:8443: connect: connection refused" interval="800ms"
	I0514 00:18:03.201297    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.515338    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:03.201340    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.516940    1520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.102.122:8443: connect: connection refused" node="multinode-101100"
	I0514 00:18:03.201423    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: W0514 00:16:50.730920    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.201464    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.730993    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.201507    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: W0514 00:16:51.074200    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.201549    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.074270    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.201549    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: I0514 00:16:51.076835    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="419648c0d4053fc49953367496f1dbfe0fc7ce631e09569d18f5031a7c94053b"
	I0514 00:18:03.201592    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: W0514 00:16:51.081775    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-101100&limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.201654    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.081938    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-101100&limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.201716    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: I0514 00:16:51.108133    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="509b8407e0955daa05e6418b83790728e61d0bd72fecdd814c8e92ae9e80d3a3"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.218458    1520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-101100?timeout=10s\": dial tcp 172.23.102.122:8443: connect: connection refused" interval="1.6s"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: I0514 00:16:51.318715    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.319804    1520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.102.122:8443: connect: connection refused" node="multinode-101100"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: W0514 00:16:51.367337    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.367409    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:52 multinode-101100 kubelet[1520]: I0514 00:16:52.921237    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.086028    1520 kubelet_node_status.go:112] "Node was previously registered" node="multinode-101100"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.086698    1520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-multinode-101100\" already exists" pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.086743    1520 kubelet_node_status.go:76] "Successfully registered node" node="multinode-101100"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.088971    1520 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.090614    1520 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.091996    1520 setters.go:580] "Node became not ready" node="multinode-101100" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-05-14T00:16:55Z","lastTransitionTime":"2024-05-14T00:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.783435    1520 apiserver.go:52] "Watching apiserver"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.788503    1520 topology_manager.go:215] "Topology Admit Handler" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4kmx4"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.788795    1520 topology_manager.go:215] "Topology Admit Handler" podUID="5b3ee167-f21f-46b3-bace-03a7233717e0" podNamespace="kube-system" podName="kindnet-9q2tv"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.788932    1520 topology_manager.go:215] "Topology Admit Handler" podUID="a9a488af-41ba-47f3-87b0-5a2f062afad6" podNamespace="kube-system" podName="kube-proxy-zhcz6"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.789028    1520 topology_manager.go:215] "Topology Admit Handler" podUID="a92f04b8-a93f-42d8-81d7-d4da6bf2e247" podNamespace="kube-system" podName="storage-provisioner"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.789184    1520 topology_manager.go:215] "Topology Admit Handler" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae" podNamespace="default" podName="busybox-fc5497c4f-xqj6w"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.789553    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.789850    1520 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-101100" podUID="1d9c79a4-1e4a-46fb-b3e8-02a4775f40af"
	I0514 00:18:03.201738    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.790329    1520 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-101100" podUID="cd31d030-75f8-4abb-bcad-34031cec7aa6"
	I0514 00:18:03.202264    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.794088    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.202304    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.798934    1520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-multinode-101100\" already exists" pod="kube-system/kube-scheduler-multinode-101100"
	I0514 00:18:03.202304    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.809466    1520 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0514 00:18:03.202349    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.835196    1520 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-101100"
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.857783    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5b3ee167-f21f-46b3-bace-03a7233717e0-cni-cfg\") pod \"kindnet-9q2tv\" (UID: \"5b3ee167-f21f-46b3-bace-03a7233717e0\") " pod="kube-system/kindnet-9q2tv"
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.857845    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b3ee167-f21f-46b3-bace-03a7233717e0-xtables-lock\") pod \"kindnet-9q2tv\" (UID: \"5b3ee167-f21f-46b3-bace-03a7233717e0\") " pod="kube-system/kindnet-9q2tv"
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.857866    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9a488af-41ba-47f3-87b0-5a2f062afad6-xtables-lock\") pod \"kube-proxy-zhcz6\" (UID: \"a9a488af-41ba-47f3-87b0-5a2f062afad6\") " pod="kube-system/kube-proxy-zhcz6"
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.857954    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b3ee167-f21f-46b3-bace-03a7233717e0-lib-modules\") pod \"kindnet-9q2tv\" (UID: \"5b3ee167-f21f-46b3-bace-03a7233717e0\") " pod="kube-system/kindnet-9q2tv"
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.858020    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a92f04b8-a93f-42d8-81d7-d4da6bf2e247-tmp\") pod \"storage-provisioner\" (UID: \"a92f04b8-a93f-42d8-81d7-d4da6bf2e247\") " pod="kube-system/storage-provisioner"
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.858051    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9a488af-41ba-47f3-87b0-5a2f062afad6-lib-modules\") pod \"kube-proxy-zhcz6\" (UID: \"a9a488af-41ba-47f3-87b0-5a2f062afad6\") " pod="kube-system/kube-proxy-zhcz6"
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.859176    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.859325    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:16:56.359260421 +0000 UTC m=+6.710289036 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.873841    1520 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-101100"
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.907826    1520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03d9b35578220c9e99f77722d9aa294f" path="/var/lib/kubelet/pods/03d9b35578220c9e99f77722d9aa294f/volumes"
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.910490    1520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1af4b764a5249ff25d3c1c709387c273" path="/var/lib/kubelet/pods/1af4b764a5249ff25d3c1c709387c273/volumes"
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.917375    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.917415    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.917466    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:16:56.417450852 +0000 UTC m=+6.768479567 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.964380    1520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-101100" podStartSLOduration=0.9643304 podStartE2EDuration="964.3304ms" podCreationTimestamp="2024-05-14 00:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-14 00:16:55.964174289 +0000 UTC m=+6.315203004" watchObservedRunningTime="2024-05-14 00:16:55.9643304 +0000 UTC m=+6.315359015"
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.985118    1520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-101100" podStartSLOduration=0.985100539 podStartE2EDuration="985.100539ms" podCreationTimestamp="2024-05-14 00:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-14 00:16:55.984806519 +0000 UTC m=+6.335835134" watchObservedRunningTime="2024-05-14 00:16:55.985100539 +0000 UTC m=+6.336129154"
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.362973    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:03.202379    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.363041    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:16:57.363025821 +0000 UTC m=+7.714054436 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:03.202904    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.463836    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.202942    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.463868    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.202990    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.463923    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:16:57.46390701 +0000 UTC m=+7.814935725 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.377986    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.378101    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:16:59.378049439 +0000 UTC m=+9.729078054 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.478290    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.478356    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.478448    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:16:59.478431994 +0000 UTC m=+9.829460709 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.899119    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.899678    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.394980    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.395173    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:17:03.39515828 +0000 UTC m=+13.746186895 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.496260    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.496313    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.496438    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:17:03.496350091 +0000 UTC m=+13.847378806 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.891391    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.901591    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.914896    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.203023    4316 command_runner.go:130] > May 14 00:17:01 multinode-101100 kubelet[1520]: E0514 00:17:01.898894    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.203551    4316 command_runner.go:130] > May 14 00:17:01 multinode-101100 kubelet[1520]: E0514 00:17:01.899345    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.203551    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.445887    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.445965    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:17:11.44595071 +0000 UTC m=+21.796979425 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.547258    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.547292    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.547346    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:17:11.547331033 +0000 UTC m=+21.898359648 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.899515    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.900290    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:04 multinode-101100 kubelet[1520]: E0514 00:17:04.893282    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:05 multinode-101100 kubelet[1520]: E0514 00:17:05.900260    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:05 multinode-101100 kubelet[1520]: E0514 00:17:05.900651    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:07 multinode-101100 kubelet[1520]: E0514 00:17:07.899212    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:07 multinode-101100 kubelet[1520]: E0514 00:17:07.899658    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:09 multinode-101100 kubelet[1520]: E0514 00:17:09.895008    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:09 multinode-101100 kubelet[1520]: E0514 00:17:09.899381    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:09 multinode-101100 kubelet[1520]: E0514 00:17:09.899884    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.508629    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.508833    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:17:27.508813455 +0000 UTC m=+37.859842170 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.609334    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.609455    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.609579    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:17:27.609562102 +0000 UTC m=+37.960590817 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.899431    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.899749    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:13 multinode-101100 kubelet[1520]: E0514 00:17:13.898578    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:13 multinode-101100 kubelet[1520]: E0514 00:17:13.899676    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:14 multinode-101100 kubelet[1520]: E0514 00:17:14.897029    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:15 multinode-101100 kubelet[1520]: E0514 00:17:15.899665    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:15 multinode-101100 kubelet[1520]: E0514 00:17:15.900476    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: I0514 00:17:17.766386    1520 scope.go:117] "RemoveContainer" containerID="9c4eb727cedb65853cc3a94fdcc3e267ed41cd9cb15ef1cc1bb84f6f2278c9c4"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: I0514 00:17:17.767364    1520 scope.go:117] "RemoveContainer" containerID="b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: E0514 00:17:17.767901    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kindnet-cni pod=kindnet-9q2tv_kube-system(5b3ee167-f21f-46b3-bace-03a7233717e0)\"" pod="kube-system/kindnet-9q2tv" podUID="5b3ee167-f21f-46b3-bace-03a7233717e0"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: E0514 00:17:17.898891    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: E0514 00:17:17.899300    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:19 multinode-101100 kubelet[1520]: E0514 00:17:19.898102    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:03.203612    4316 command_runner.go:130] > May 14 00:17:19 multinode-101100 kubelet[1520]: E0514 00:17:19.899045    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.204698    4316 command_runner.go:130] > May 14 00:17:19 multinode-101100 kubelet[1520]: E0514 00:17:19.899315    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.204740    4316 command_runner.go:130] > May 14 00:17:21 multinode-101100 kubelet[1520]: E0514 00:17:21.900488    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.204740    4316 command_runner.go:130] > May 14 00:17:21 multinode-101100 kubelet[1520]: E0514 00:17:21.900677    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.204798    4316 command_runner.go:130] > May 14 00:17:23 multinode-101100 kubelet[1520]: E0514 00:17:23.899091    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.204798    4316 command_runner.go:130] > May 14 00:17:23 multinode-101100 kubelet[1520]: E0514 00:17:23.899625    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.204846    4316 command_runner.go:130] > May 14 00:17:24 multinode-101100 kubelet[1520]: E0514 00:17:24.899382    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:03.204893    4316 command_runner.go:130] > May 14 00:17:25 multinode-101100 kubelet[1520]: E0514 00:17:25.900463    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.204924    4316 command_runner.go:130] > May 14 00:17:25 multinode-101100 kubelet[1520]: E0514 00:17:25.900948    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.204962    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.550622    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:03.205060    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.550839    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:17:59.550821042 +0000 UTC m=+69.901849657 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:03.205099    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.651942    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.205128    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.651988    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.205195    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.652038    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:17:59.652024653 +0000 UTC m=+70.003053368 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:03.205233    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.900302    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.205263    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.901190    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.205301    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: I0514 00:17:27.901408    1520 scope.go:117] "RemoveContainer" containerID="b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7"
	I0514 00:18:03.205330    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: I0514 00:17:27.913749    1520 scope.go:117] "RemoveContainer" containerID="e6ee22ee5c1b88cb0b1190c646094aefe229bfbd4486f007cde2b36da39ca886"
	I0514 00:18:03.205330    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: I0514 00:17:27.914050    1520 scope.go:117] "RemoveContainer" containerID="b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc"
	I0514 00:18:03.205369    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.914651    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a92f04b8-a93f-42d8-81d7-d4da6bf2e247)\"" pod="kube-system/storage-provisioner" podUID="a92f04b8-a93f-42d8-81d7-d4da6bf2e247"
	I0514 00:18:03.205398    4316 command_runner.go:130] > May 14 00:17:29 multinode-101100 kubelet[1520]: E0514 00:17:29.898652    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.205472    4316 command_runner.go:130] > May 14 00:17:29 multinode-101100 kubelet[1520]: E0514 00:17:29.899154    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.205472    4316 command_runner.go:130] > May 14 00:17:29 multinode-101100 kubelet[1520]: E0514 00:17:29.900744    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:17:31 multinode-101100 kubelet[1520]: E0514 00:17:31.900407    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:17:31 multinode-101100 kubelet[1520]: E0514 00:17:31.902295    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:17:33 multinode-101100 kubelet[1520]: E0514 00:17:33.898560    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:17:33 multinode-101100 kubelet[1520]: E0514 00:17:33.899627    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:17:39 multinode-101100 kubelet[1520]: I0514 00:17:39.899892    1520 scope.go:117] "RemoveContainer" containerID="b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc"
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]: I0514 00:17:49.888753    1520 scope.go:117] "RemoveContainer" containerID="eda79d47d28ffbc726bec7eaad072eeebb31ec439ed9bbe9fd544b9913b8f3ea"
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]: E0514 00:17:49.924547    1520 iptables.go:577] "Could not set up iptables canary" err=<
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]: I0514 00:17:49.932695    1520 scope.go:117] "RemoveContainer" containerID="06f1a683cad8348fc4f8e339f226bbda12c4e8c1025c7acb52e2792253dd3008"
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 kubelet[1520]: I0514 00:18:00.478966    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cccb5e8cee3b173bd49a88aee4239ccc8bc11a3a166316e92f3a9abce9b252d"
	I0514 00:18:03.205550    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 kubelet[1520]: I0514 00:18:00.543407    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cb9b6d6d0915742a78c054211d49332a04beb4875f8a8f80cc4131b2a11aa2d"
	I0514 00:18:03.242705    4316 logs.go:123] Gathering logs for describe nodes ...
	I0514 00:18:03.242705    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0514 00:18:03.495448    4316 command_runner.go:130] > Name:               multinode-101100
	I0514 00:18:03.495448    4316 command_runner.go:130] > Roles:              control-plane
	I0514 00:18:03.495448    4316 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     kubernetes.io/hostname=multinode-101100
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     kubernetes.io/os=linux
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     minikube.k8s.io/name=multinode-101100
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_13T23_56_10_0700
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0514 00:18:03.495448    4316 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0514 00:18:03.495448    4316 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0514 00:18:03.495448    4316 command_runner.go:130] > CreationTimestamp:  Mon, 13 May 2024 23:56:06 +0000
	I0514 00:18:03.495448    4316 command_runner.go:130] > Taints:             <none>
	I0514 00:18:03.495448    4316 command_runner.go:130] > Unschedulable:      false
	I0514 00:18:03.495448    4316 command_runner.go:130] > Lease:
	I0514 00:18:03.495448    4316 command_runner.go:130] >   HolderIdentity:  multinode-101100
	I0514 00:18:03.495448    4316 command_runner.go:130] >   AcquireTime:     <unset>
	I0514 00:18:03.495448    4316 command_runner.go:130] >   RenewTime:       Tue, 14 May 2024 00:17:56 +0000
	I0514 00:18:03.495448    4316 command_runner.go:130] > Conditions:
	I0514 00:18:03.495448    4316 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0514 00:18:03.495448    4316 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0514 00:18:03.495448    4316 command_runner.go:130] >   MemoryPressure   False   Tue, 14 May 2024 00:17:35 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0514 00:18:03.495448    4316 command_runner.go:130] >   DiskPressure     False   Tue, 14 May 2024 00:17:35 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0514 00:18:03.495448    4316 command_runner.go:130] >   PIDPressure      False   Tue, 14 May 2024 00:17:35 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0514 00:18:03.495448    4316 command_runner.go:130] >   Ready            True    Tue, 14 May 2024 00:17:35 +0000   Tue, 14 May 2024 00:17:35 +0000   KubeletReady                 kubelet is posting ready status
	I0514 00:18:03.495448    4316 command_runner.go:130] > Addresses:
	I0514 00:18:03.495448    4316 command_runner.go:130] >   InternalIP:  172.23.102.122
	I0514 00:18:03.495448    4316 command_runner.go:130] >   Hostname:    multinode-101100
	I0514 00:18:03.495448    4316 command_runner.go:130] > Capacity:
	I0514 00:18:03.495448    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:03.495448    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:03.495448    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:03.495448    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:03.495448    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:03.495448    4316 command_runner.go:130] > Allocatable:
	I0514 00:18:03.496459    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:03.496459    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:03.496459    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:03.496459    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:03.496459    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:03.496628    4316 command_runner.go:130] > System Info:
	I0514 00:18:03.496628    4316 command_runner.go:130] >   Machine ID:                 5110a322e7104904905e303a94b950b6
	I0514 00:18:03.496628    4316 command_runner.go:130] >   System UUID:                9b23fe4d-6d34-444b-8185-a84d51d23610
	I0514 00:18:03.496628    4316 command_runner.go:130] >   Boot ID:                    2e73d191-2dbe-4055-a17d-cff8a9e53a15
	I0514 00:18:03.496799    4316 command_runner.go:130] >   Kernel Version:             5.10.207
	I0514 00:18:03.496799    4316 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0514 00:18:03.496799    4316 command_runner.go:130] >   Operating System:           linux
	I0514 00:18:03.496799    4316 command_runner.go:130] >   Architecture:               amd64
	I0514 00:18:03.496799    4316 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0514 00:18:03.496956    4316 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0514 00:18:03.496956    4316 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0514 00:18:03.497068    4316 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0514 00:18:03.497068    4316 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0514 00:18:03.497134    4316 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0514 00:18:03.497134    4316 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0514 00:18:03.497134    4316 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0514 00:18:03.497276    4316 command_runner.go:130] >   default                     busybox-fc5497c4f-xqj6w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	I0514 00:18:03.497389    4316 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-4kmx4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	I0514 00:18:03.497448    4316 command_runner.go:130] >   kube-system                 etcd-multinode-101100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         68s
	I0514 00:18:03.497522    4316 command_runner.go:130] >   kube-system                 kindnet-9q2tv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0514 00:18:03.497584    4316 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-101100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	I0514 00:18:03.497619    4316 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-101100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	I0514 00:18:03.497764    4316 command_runner.go:130] >   kube-system                 kube-proxy-zhcz6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0514 00:18:03.497764    4316 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-101100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	I0514 00:18:03.497764    4316 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0514 00:18:03.497902    4316 command_runner.go:130] > Allocated resources:
	I0514 00:18:03.497902    4316 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0514 00:18:03.497902    4316 command_runner.go:130] >   Resource           Requests     Limits
	I0514 00:18:03.498069    4316 command_runner.go:130] >   --------           --------     ------
	I0514 00:18:03.498069    4316 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0514 00:18:03.498069    4316 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0514 00:18:03.498486    4316 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0514 00:18:03.498519    4316 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0514 00:18:03.498615    4316 command_runner.go:130] > Events:
	I0514 00:18:03.498615    4316 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0514 00:18:03.498787    4316 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0514 00:18:03.498830    4316 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0514 00:18:03.498830    4316 command_runner.go:130] >   Normal  Starting                 65s                kube-proxy       
	I0514 00:18:03.498938    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	I0514 00:18:03.499017    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	I0514 00:18:03.499104    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	I0514 00:18:03.499149    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:03.499266    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m                kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	I0514 00:18:03.499355    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:03.499389    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m                kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	I0514 00:18:03.499516    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m                kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	I0514 00:18:03.499516    4316 command_runner.go:130] >   Normal  Starting                 21m                kubelet          Starting kubelet.
	I0514 00:18:03.499656    4316 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-101100 event: Registered Node multinode-101100 in Controller
	I0514 00:18:03.499691    4316 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-101100 status is now: NodeReady
	I0514 00:18:03.499777    4316 command_runner.go:130] >   Normal  Starting                 74s                kubelet          Starting kubelet.
	I0514 00:18:03.499857    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:03.499955    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  73s (x8 over 74s)  kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	I0514 00:18:03.500042    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    73s (x8 over 74s)  kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	I0514 00:18:03.500042    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     73s (x7 over 74s)  kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	I0514 00:18:03.500174    4316 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-101100 event: Registered Node multinode-101100 in Controller
	I0514 00:18:03.500174    4316 command_runner.go:130] > Name:               multinode-101100-m02
	I0514 00:18:03.500174    4316 command_runner.go:130] > Roles:              <none>
	I0514 00:18:03.500325    4316 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0514 00:18:03.500361    4316 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0514 00:18:03.500458    4316 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0514 00:18:03.500496    4316 command_runner.go:130] >                     kubernetes.io/hostname=multinode-101100-m02
	I0514 00:18:03.500496    4316 command_runner.go:130] >                     kubernetes.io/os=linux
	I0514 00:18:03.500575    4316 command_runner.go:130] >                     minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	I0514 00:18:03.500670    4316 command_runner.go:130] >                     minikube.k8s.io/name=multinode-101100
	I0514 00:18:03.500670    4316 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0514 00:18:03.500767    4316 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_13T23_59_02_0700
	I0514 00:18:03.500808    4316 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0514 00:18:03.500906    4316 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0514 00:18:03.500906    4316 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0514 00:18:03.500948    4316 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0514 00:18:03.501088    4316 command_runner.go:130] > CreationTimestamp:  Mon, 13 May 2024 23:59:02 +0000
	I0514 00:18:03.501088    4316 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0514 00:18:03.501187    4316 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0514 00:18:03.501229    4316 command_runner.go:130] > Unschedulable:      false
	I0514 00:18:03.501297    4316 command_runner.go:130] > Lease:
	I0514 00:18:03.501297    4316 command_runner.go:130] >   HolderIdentity:  multinode-101100-m02
	I0514 00:18:03.501297    4316 command_runner.go:130] >   AcquireTime:     <unset>
	I0514 00:18:03.501437    4316 command_runner.go:130] >   RenewTime:       Tue, 14 May 2024 00:13:52 +0000
	I0514 00:18:03.501437    4316 command_runner.go:130] > Conditions:
	I0514 00:18:03.501535    4316 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0514 00:18:03.501578    4316 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0514 00:18:03.501672    4316 command_runner.go:130] >   MemoryPressure   Unknown   Tue, 14 May 2024 00:10:15 +0000   Tue, 14 May 2024 00:14:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:03.501714    4316 command_runner.go:130] >   DiskPressure     Unknown   Tue, 14 May 2024 00:10:15 +0000   Tue, 14 May 2024 00:14:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:03.501851    4316 command_runner.go:130] >   PIDPressure      Unknown   Tue, 14 May 2024 00:10:15 +0000   Tue, 14 May 2024 00:14:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:03.501943    4316 command_runner.go:130] >   Ready            Unknown   Tue, 14 May 2024 00:10:15 +0000   Tue, 14 May 2024 00:14:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:03.501985    4316 command_runner.go:130] > Addresses:
	I0514 00:18:03.501985    4316 command_runner.go:130] >   InternalIP:  172.23.109.58
	I0514 00:18:03.501985    4316 command_runner.go:130] >   Hostname:    multinode-101100-m02
	I0514 00:18:03.502084    4316 command_runner.go:130] > Capacity:
	I0514 00:18:03.502126    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:03.502126    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:03.502126    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:03.502225    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:03.502225    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:03.502337    4316 command_runner.go:130] > Allocatable:
	I0514 00:18:03.502337    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:03.502337    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:03.502435    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:03.502479    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:03.502577    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:03.502577    4316 command_runner.go:130] > System Info:
	I0514 00:18:03.502619    4316 command_runner.go:130] >   Machine ID:                 8d348bb1bbc048f4b99c681873b42d63
	I0514 00:18:03.502716    4316 command_runner.go:130] >   System UUID:                4330851b-5248-f245-9378-5fc25e670b55
	I0514 00:18:03.502759    4316 command_runner.go:130] >   Boot ID:                    9f102be6-1468-4570-8696-97e5ce51649a
	I0514 00:18:03.502759    4316 command_runner.go:130] >   Kernel Version:             5.10.207
	I0514 00:18:03.502884    4316 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0514 00:18:03.502963    4316 command_runner.go:130] >   Operating System:           linux
	I0514 00:18:03.502963    4316 command_runner.go:130] >   Architecture:               amd64
	I0514 00:18:03.502963    4316 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0514 00:18:03.503071    4316 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0514 00:18:03.503071    4316 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0514 00:18:03.503071    4316 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0514 00:18:03.503165    4316 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0514 00:18:03.503165    4316 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0514 00:18:03.503250    4316 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0514 00:18:03.503343    4316 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0514 00:18:03.503343    4316 command_runner.go:130] >   default                     busybox-fc5497c4f-q7442    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	I0514 00:18:03.503430    4316 command_runner.go:130] >   kube-system                 kindnet-2lwsm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	I0514 00:18:03.503522    4316 command_runner.go:130] >   kube-system                 kube-proxy-b25hq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0514 00:18:03.503522    4316 command_runner.go:130] > Allocated resources:
	I0514 00:18:03.503609    4316 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0514 00:18:03.503609    4316 command_runner.go:130] >   Resource           Requests   Limits
	I0514 00:18:03.503609    4316 command_runner.go:130] >   --------           --------   ------
	I0514 00:18:03.503703    4316 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0514 00:18:03.503801    4316 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0514 00:18:03.503801    4316 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0514 00:18:03.503801    4316 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0514 00:18:03.503801    4316 command_runner.go:130] > Events:
	I0514 00:18:03.503903    4316 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0514 00:18:03.503994    4316 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0514 00:18:03.503994    4316 command_runner.go:130] >   Normal  Starting                 18m                kube-proxy       
	I0514 00:18:03.504087    4316 command_runner.go:130] >   Normal  RegisteredNode           19m                node-controller  Node multinode-101100-m02 event: Registered Node multinode-101100-m02 in Controller
	I0514 00:18:03.504183    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  19m (x2 over 19m)  kubelet          Node multinode-101100-m02 status is now: NodeHasSufficientMemory
	I0514 00:18:03.504183    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    19m (x2 over 19m)  kubelet          Node multinode-101100-m02 status is now: NodeHasNoDiskPressure
	I0514 00:18:03.504276    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     19m (x2 over 19m)  kubelet          Node multinode-101100-m02 status is now: NodeHasSufficientPID
	I0514 00:18:03.504363    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:03.504363    4316 command_runner.go:130] >   Normal  NodeReady                18m                kubelet          Node multinode-101100-m02 status is now: NodeReady
	I0514 00:18:03.504455    4316 command_runner.go:130] >   Normal  NodeNotReady             3m31s              node-controller  Node multinode-101100-m02 status is now: NodeNotReady
	I0514 00:18:03.504539    4316 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-101100-m02 event: Registered Node multinode-101100-m02 in Controller
	I0514 00:18:03.504629    4316 command_runner.go:130] > Name:               multinode-101100-m03
	I0514 00:18:03.504629    4316 command_runner.go:130] > Roles:              <none>
	I0514 00:18:03.504728    4316 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0514 00:18:03.504728    4316 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0514 00:18:03.504804    4316 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0514 00:18:03.504929    4316 command_runner.go:130] >                     kubernetes.io/hostname=multinode-101100-m03
	I0514 00:18:03.505004    4316 command_runner.go:130] >                     kubernetes.io/os=linux
	I0514 00:18:03.505004    4316 command_runner.go:130] >                     minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	I0514 00:18:03.505087    4316 command_runner.go:130] >                     minikube.k8s.io/name=multinode-101100
	I0514 00:18:03.505087    4316 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0514 00:18:03.505087    4316 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_14T00_12_45_0700
	I0514 00:18:03.505087    4316 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0514 00:18:03.505087    4316 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0514 00:18:03.505087    4316 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0514 00:18:03.505087    4316 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0514 00:18:03.505087    4316 command_runner.go:130] > CreationTimestamp:  Tue, 14 May 2024 00:12:44 +0000
	I0514 00:18:03.505087    4316 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0514 00:18:03.505087    4316 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0514 00:18:03.505087    4316 command_runner.go:130] > Unschedulable:      false
	I0514 00:18:03.505087    4316 command_runner.go:130] > Lease:
	I0514 00:18:03.505087    4316 command_runner.go:130] >   HolderIdentity:  multinode-101100-m03
	I0514 00:18:03.505087    4316 command_runner.go:130] >   AcquireTime:     <unset>
	I0514 00:18:03.505087    4316 command_runner.go:130] >   RenewTime:       Tue, 14 May 2024 00:13:36 +0000
	I0514 00:18:03.505087    4316 command_runner.go:130] > Conditions:
	I0514 00:18:03.505087    4316 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0514 00:18:03.505087    4316 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0514 00:18:03.505087    4316 command_runner.go:130] >   MemoryPressure   Unknown   Tue, 14 May 2024 00:12:49 +0000   Tue, 14 May 2024 00:14:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:03.505087    4316 command_runner.go:130] >   DiskPressure     Unknown   Tue, 14 May 2024 00:12:49 +0000   Tue, 14 May 2024 00:14:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:03.505087    4316 command_runner.go:130] >   PIDPressure      Unknown   Tue, 14 May 2024 00:12:49 +0000   Tue, 14 May 2024 00:14:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:03.505087    4316 command_runner.go:130] >   Ready            Unknown   Tue, 14 May 2024 00:12:49 +0000   Tue, 14 May 2024 00:14:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:03.505087    4316 command_runner.go:130] > Addresses:
	I0514 00:18:03.505087    4316 command_runner.go:130] >   InternalIP:  172.23.102.231
	I0514 00:18:03.505087    4316 command_runner.go:130] >   Hostname:    multinode-101100-m03
	I0514 00:18:03.505087    4316 command_runner.go:130] > Capacity:
	I0514 00:18:03.505087    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:03.505087    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:03.505087    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:03.505087    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:03.505087    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:03.505646    4316 command_runner.go:130] > Allocatable:
	I0514 00:18:03.505646    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:03.505827    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:03.505827    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:03.505827    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:03.505827    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:03.505827    4316 command_runner.go:130] > System Info:
	I0514 00:18:03.505827    4316 command_runner.go:130] >   Machine ID:                 11c3fac528de4278b1dafef49e54ea09
	I0514 00:18:03.505827    4316 command_runner.go:130] >   System UUID:                0ee228e5-87a6-0549-9a8d-1747b73431ee
	I0514 00:18:03.505827    4316 command_runner.go:130] >   Boot ID:                    d5c1e04c-3081-4871-912e-a86507b8e24a
	I0514 00:18:03.505827    4316 command_runner.go:130] >   Kernel Version:             5.10.207
	I0514 00:18:03.505827    4316 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0514 00:18:03.505827    4316 command_runner.go:130] >   Operating System:           linux
	I0514 00:18:03.505827    4316 command_runner.go:130] >   Architecture:               amd64
	I0514 00:18:03.505827    4316 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0514 00:18:03.505827    4316 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0514 00:18:03.505827    4316 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0514 00:18:03.505827    4316 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0514 00:18:03.505827    4316 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0514 00:18:03.505827    4316 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0514 00:18:03.505827    4316 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0514 00:18:03.506365    4316 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0514 00:18:03.506400    4316 command_runner.go:130] >   kube-system                 kindnet-tfbt8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	I0514 00:18:03.506400    4316 command_runner.go:130] >   kube-system                 kube-proxy-8zsgn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	I0514 00:18:03.506400    4316 command_runner.go:130] > Allocated resources:
	I0514 00:18:03.506400    4316 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0514 00:18:03.506400    4316 command_runner.go:130] >   Resource           Requests   Limits
	I0514 00:18:03.506400    4316 command_runner.go:130] >   --------           --------   ------
	I0514 00:18:03.506400    4316 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0514 00:18:03.506400    4316 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0514 00:18:03.506400    4316 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0514 00:18:03.506400    4316 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0514 00:18:03.506400    4316 command_runner.go:130] > Events:
	I0514 00:18:03.506400    4316 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0514 00:18:03.506400    4316 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0514 00:18:03.506400    4316 command_runner.go:130] >   Normal  Starting                 5m16s                  kube-proxy       
	I0514 00:18:03.506400    4316 command_runner.go:130] >   Normal  Starting                 14m                    kube-proxy       
	I0514 00:18:03.506400    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:03.506400    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  14m (x2 over 14m)      kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientMemory
	I0514 00:18:03.506400    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    14m (x2 over 14m)      kubelet          Node multinode-101100-m03 status is now: NodeHasNoDiskPressure
	I0514 00:18:03.506926    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     14m (x2 over 14m)      kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientPID
	I0514 00:18:03.506988    4316 command_runner.go:130] >   Normal  NodeReady                14m                    kubelet          Node multinode-101100-m03 status is now: NodeReady
	I0514 00:18:03.506988    4316 command_runner.go:130] >   Normal  Starting                 5m19s                  kubelet          Starting kubelet.
	I0514 00:18:03.506988    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m19s (x2 over 5m19s)  kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientMemory
	I0514 00:18:03.506988    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m19s (x2 over 5m19s)  kubelet          Node multinode-101100-m03 status is now: NodeHasNoDiskPressure
	I0514 00:18:03.506988    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m19s (x2 over 5m19s)  kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientPID
	I0514 00:18:03.506988    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m19s                  kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:03.506988    4316 command_runner.go:130] >   Normal  RegisteredNode           5m16s                  node-controller  Node multinode-101100-m03 event: Registered Node multinode-101100-m03 in Controller
	I0514 00:18:03.506988    4316 command_runner.go:130] >   Normal  NodeReady                5m14s                  kubelet          Node multinode-101100-m03 status is now: NodeReady
	I0514 00:18:03.506988    4316 command_runner.go:130] >   Normal  NodeNotReady             3m46s                  node-controller  Node multinode-101100-m03 status is now: NodeNotReady
	I0514 00:18:03.506988    4316 command_runner.go:130] >   Normal  RegisteredNode           56s                    node-controller  Node multinode-101100-m03 event: Registered Node multinode-101100-m03 in Controller
	I0514 00:18:03.518924    4316 logs.go:123] Gathering logs for etcd [08450c853590] ...
	I0514 00:18:03.518924    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08450c853590"
	I0514 00:18:03.551355    4316 command_runner.go:130] ! {"level":"warn","ts":"2024-05-14T00:16:51.687231Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0514 00:18:03.551834    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.691397Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.23.102.122:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.23.102.122:2380","--initial-cluster=multinode-101100=https://172.23.102.122:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.23.102.122:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.23.102.122:2380","--name=multinode-101100","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0514 00:18:03.551834    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.692425Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0514 00:18:03.551834    4316 command_runner.go:130] ! {"level":"warn","ts":"2024-05-14T00:16:51.693634Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0514 00:18:03.551910    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.693771Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.23.102.122:2380"]}
	I0514 00:18:03.551948    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.694117Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0514 00:18:03.551948    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.703219Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.23.102.122:2379"]}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.704312Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-101100","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.23.102.122:2380"],"listen-peer-urls":["https://172.23.102.122:2380"],"advertise-client-urls":["https://172.23.102.122:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.102.122:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.7264Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"19.905879ms"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.748539Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.766395Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"bb849d1df0b559d7","local-member-id":"6e4c15c3d0f3380f","commit-index":1898}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.767439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f switched to configuration voters=()"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.767611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became follower at term 2"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.768086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 6e4c15c3d0f3380f [peers: [], term: 2, commit: 1898, applied: 0, lastindex: 1898, lastterm: 2]"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"warn","ts":"2024-05-14T00:16:51.782157Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.786938Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1096}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.797876Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1653}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.80426Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.81216Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"6e4c15c3d0f3380f","timeout":"7s"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.813213Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"6e4c15c3d0f3380f"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.814234Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"6e4c15c3d0f3380f","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.815302Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.816695Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.816877Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.816978Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.817493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f switched to configuration voters=(7947751373170489359)"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.817687Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bb849d1df0b559d7","local-member-id":"6e4c15c3d0f3380f","added-peer-id":"6e4c15c3d0f3380f","added-peer-peer-urls":["https://172.23.106.39:2380"]}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.817911Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bb849d1df0b559d7","local-member-id":"6e4c15c3d0f3380f","cluster-version":"3.5"}
	I0514 00:18:03.551980    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.818648Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0514 00:18:03.552509    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.83299Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0514 00:18:03.552583    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.834951Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6e4c15c3d0f3380f","initial-advertise-peer-urls":["https://172.23.102.122:2380"],"listen-peer-urls":["https://172.23.102.122:2380"],"advertise-client-urls":["https://172.23.102.122:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.102.122:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0514 00:18:03.552620    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.835138Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0514 00:18:03.552620    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.835469Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.23.102.122:2380"}
	I0514 00:18:03.552661    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.835603Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.23.102.122:2380"}
	I0514 00:18:03.552661    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.468953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f is starting a new election at term 2"}
	I0514 00:18:03.552700    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became pre-candidate at term 2"}
	I0514 00:18:03.552700    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f received MsgPreVoteResp from 6e4c15c3d0f3380f at term 2"}
	I0514 00:18:03.552739    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became candidate at term 3"}
	I0514 00:18:03.552739    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f received MsgVoteResp from 6e4c15c3d0f3380f at term 3"}
	I0514 00:18:03.552778    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became leader at term 3"}
	I0514 00:18:03.552819    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6e4c15c3d0f3380f elected leader 6e4c15c3d0f3380f at term 3"}
	I0514 00:18:03.552819    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.479025Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"6e4c15c3d0f3380f","local-member-attributes":"{Name:multinode-101100 ClientURLs:[https://172.23.102.122:2379]}","request-path":"/0/members/6e4c15c3d0f3380f/attributes","cluster-id":"bb849d1df0b559d7","publish-timeout":"7s"}
	I0514 00:18:03.552857    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.479459Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0514 00:18:03.552898    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.479642Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0514 00:18:03.552898    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.481317Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0514 00:18:03.552936    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.481353Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0514 00:18:03.552936    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.483334Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.23.102.122:2379"}
	I0514 00:18:03.552975    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.483616Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0514 00:18:03.564444    4316 logs.go:123] Gathering logs for dmesg ...
	I0514 00:18:03.564444    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0514 00:18:03.586007    4316 command_runner.go:130] > [May14 00:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0514 00:18:03.586052    4316 command_runner.go:130] > [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0514 00:18:03.586052    4316 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0514 00:18:03.586151    4316 command_runner.go:130] > [  +0.104207] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0514 00:18:03.586151    4316 command_runner.go:130] > [  +0.023601] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0514 00:18:03.586207    4316 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0514 00:18:03.586207    4316 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0514 00:18:03.586282    4316 command_runner.go:130] > [  +0.058832] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0514 00:18:03.586311    4316 command_runner.go:130] > [  +0.024495] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0514 00:18:03.586349    4316 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0514 00:18:03.586405    4316 command_runner.go:130] > [  +5.692465] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0514 00:18:03.586405    4316 command_runner.go:130] > [  +0.707713] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0514 00:18:03.586448    4316 command_runner.go:130] > [  +1.789899] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0514 00:18:03.586489    4316 command_runner.go:130] > [  +7.282690] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0514 00:18:03.586489    4316 command_runner.go:130] > [  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0514 00:18:03.586531    4316 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0514 00:18:03.586571    4316 command_runner.go:130] > [May14 00:16] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	I0514 00:18:03.586571    4316 command_runner.go:130] > [  +0.158382] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	I0514 00:18:03.586614    4316 command_runner.go:130] > [ +23.750429] systemd-fstab-generator[974]: Ignoring "noauto" option for root device
	I0514 00:18:03.586614    4316 command_runner.go:130] > [  +0.111929] kauditd_printk_skb: 73 callbacks suppressed
	I0514 00:18:03.586654    4316 command_runner.go:130] > [  +0.464883] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	I0514 00:18:03.586690    4316 command_runner.go:130] > [  +0.164872] systemd-fstab-generator[1027]: Ignoring "noauto" option for root device
	I0514 00:18:03.586729    4316 command_runner.go:130] > [  +0.194348] systemd-fstab-generator[1041]: Ignoring "noauto" option for root device
	I0514 00:18:03.586729    4316 command_runner.go:130] > [  +2.832176] systemd-fstab-generator[1229]: Ignoring "noauto" option for root device
	I0514 00:18:03.586772    4316 command_runner.go:130] > [  +0.181315] systemd-fstab-generator[1241]: Ignoring "noauto" option for root device
	I0514 00:18:03.586772    4316 command_runner.go:130] > [  +0.160798] systemd-fstab-generator[1253]: Ignoring "noauto" option for root device
	I0514 00:18:03.586824    4316 command_runner.go:130] > [  +0.238904] systemd-fstab-generator[1268]: Ignoring "noauto" option for root device
	I0514 00:18:03.586824    4316 command_runner.go:130] > [  +0.787359] systemd-fstab-generator[1378]: Ignoring "noauto" option for root device
	I0514 00:18:03.586870    4316 command_runner.go:130] > [  +0.085936] kauditd_printk_skb: 205 callbacks suppressed
	I0514 00:18:03.586870    4316 command_runner.go:130] > [  +3.384697] systemd-fstab-generator[1513]: Ignoring "noauto" option for root device
	I0514 00:18:03.586910    4316 command_runner.go:130] > [  +1.802132] kauditd_printk_skb: 64 callbacks suppressed
	I0514 00:18:03.586910    4316 command_runner.go:130] > [  +5.213940] kauditd_printk_skb: 10 callbacks suppressed
	I0514 00:18:03.586965    4316 command_runner.go:130] > [  +3.471694] systemd-fstab-generator[2315]: Ignoring "noauto" option for root device
	I0514 00:18:03.586965    4316 command_runner.go:130] > [May14 00:17] kauditd_printk_skb: 70 callbacks suppressed
	I0514 00:18:03.588869    4316 logs.go:123] Gathering logs for kube-proxy [b2a1b31cd7de] ...
	I0514 00:18:03.588869    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a1b31cd7de"
	I0514 00:18:03.617649    4316 command_runner.go:130] ! I0514 00:16:57.528613       1 server_linux.go:69] "Using iptables proxy"
	I0514 00:18:03.617649    4316 command_runner.go:130] ! I0514 00:16:57.562847       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.102.122"]
	I0514 00:18:03.617649    4316 command_runner.go:130] ! I0514 00:16:57.701301       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0514 00:18:03.617649    4316 command_runner.go:130] ! I0514 00:16:57.701447       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0514 00:18:03.617649    4316 command_runner.go:130] ! I0514 00:16:57.701476       1 server_linux.go:165] "Using iptables Proxier"
	I0514 00:18:03.617649    4316 command_runner.go:130] ! I0514 00:16:57.708219       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0514 00:18:03.618193    4316 command_runner.go:130] ! I0514 00:16:57.708800       1 server.go:872] "Version info" version="v1.30.0"
	I0514 00:18:03.618193    4316 command_runner.go:130] ! I0514 00:16:57.708841       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:03.618193    4316 command_runner.go:130] ! I0514 00:16:57.712422       1 config.go:192] "Starting service config controller"
	I0514 00:18:03.618272    4316 command_runner.go:130] ! I0514 00:16:57.712733       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0514 00:18:03.618322    4316 command_runner.go:130] ! I0514 00:16:57.712780       1 config.go:101] "Starting endpoint slice config controller"
	I0514 00:18:03.618370    4316 command_runner.go:130] ! I0514 00:16:57.712824       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0514 00:18:03.618370    4316 command_runner.go:130] ! I0514 00:16:57.715339       1 config.go:319] "Starting node config controller"
	I0514 00:18:03.618428    4316 command_runner.go:130] ! I0514 00:16:57.717651       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0514 00:18:03.618428    4316 command_runner.go:130] ! I0514 00:16:57.815732       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0514 00:18:03.618428    4316 command_runner.go:130] ! I0514 00:16:57.815811       1 shared_informer.go:320] Caches are synced for service config
	I0514 00:18:03.618500    4316 command_runner.go:130] ! I0514 00:16:57.818050       1 shared_informer.go:320] Caches are synced for node config
	I0514 00:18:03.621258    4316 logs.go:123] Gathering logs for kube-controller-manager [b87239d1199a] ...
	I0514 00:18:03.621306    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87239d1199a"
	I0514 00:18:03.645040    4316 command_runner.go:130] ! I0514 00:16:52.414723       1 serving.go:380] Generated self-signed cert in-memory
	I0514 00:18:03.645040    4316 command_runner.go:130] ! I0514 00:16:52.798318       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0514 00:18:03.645040    4316 command_runner.go:130] ! I0514 00:16:52.798456       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:03.645040    4316 command_runner.go:130] ! I0514 00:16:52.802364       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0514 00:18:03.645040    4316 command_runner.go:130] ! I0514 00:16:52.802939       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0514 00:18:03.645040    4316 command_runner.go:130] ! I0514 00:16:52.803159       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:03.645040    4316 command_runner.go:130] ! I0514 00:16:52.803510       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:03.645040    4316 command_runner.go:130] ! I0514 00:16:56.867503       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0514 00:18:03.645040    4316 command_runner.go:130] ! I0514 00:16:56.868219       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0514 00:18:03.645040    4316 command_runner.go:130] ! I0514 00:16:56.874269       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0514 00:18:03.645040    4316 command_runner.go:130] ! I0514 00:16:56.878308       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0514 00:18:03.645040    4316 command_runner.go:130] ! I0514 00:16:56.878330       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0514 00:18:03.646082    4316 command_runner.go:130] ! I0514 00:16:56.878409       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0514 00:18:03.646082    4316 command_runner.go:130] ! I0514 00:16:56.878509       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0514 00:18:03.646082    4316 command_runner.go:130] ! I0514 00:16:56.878517       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0514 00:18:03.646082    4316 command_runner.go:130] ! I0514 00:16:56.882632       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0514 00:18:03.646203    4316 command_runner.go:130] ! I0514 00:16:56.882648       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0514 00:18:03.646203    4316 command_runner.go:130] ! I0514 00:16:56.882656       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0514 00:18:03.646203    4316 command_runner.go:130] ! I0514 00:16:56.883478       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0514 00:18:03.646203    4316 command_runner.go:130] ! I0514 00:16:56.883488       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0514 00:18:03.646203    4316 command_runner.go:130] ! I0514 00:16:56.883496       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0514 00:18:03.646322    4316 command_runner.go:130] ! I0514 00:16:56.885766       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0514 00:18:03.646322    4316 command_runner.go:130] ! I0514 00:16:56.888273       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0514 00:18:03.646322    4316 command_runner.go:130] ! I0514 00:16:56.888463       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0514 00:18:03.646411    4316 command_runner.go:130] ! I0514 00:16:56.889304       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0514 00:18:03.646411    4316 command_runner.go:130] ! I0514 00:16:56.890244       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0514 00:18:03.646411    4316 command_runner.go:130] ! I0514 00:16:56.890408       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0514 00:18:03.646411    4316 command_runner.go:130] ! I0514 00:16:56.893619       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0514 00:18:03.646508    4316 command_runner.go:130] ! I0514 00:16:56.903162       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0514 00:18:03.646508    4316 command_runner.go:130] ! I0514 00:16:56.903183       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0514 00:18:03.646508    4316 command_runner.go:130] ! I0514 00:16:56.969340       1 shared_informer.go:320] Caches are synced for tokens
	I0514 00:18:03.646508    4316 command_runner.go:130] ! I0514 00:16:56.982656       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0514 00:18:03.646508    4316 command_runner.go:130] ! I0514 00:16:56.982729       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0514 00:18:03.646508    4316 command_runner.go:130] ! I0514 00:16:56.983268       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0514 00:18:03.646644    4316 command_runner.go:130] ! I0514 00:16:56.983299       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0514 00:18:03.646644    4316 command_runner.go:130] ! I0514 00:16:56.983354       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0514 00:18:03.646732    4316 command_runner.go:130] ! I0514 00:16:56.983426       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0514 00:18:03.646732    4316 command_runner.go:130] ! I0514 00:16:56.983451       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0514 00:18:03.646732    4316 command_runner.go:130] ! W0514 00:16:56.983466       1 shared_informer.go:597] resyncPeriod 15h46m20.096782659s is smaller than resyncCheckPeriod 18h37m10.298700604s and the informer has already started. Changing it to 18h37m10.298700604s
	I0514 00:18:03.646732    4316 command_runner.go:130] ! I0514 00:16:56.983922       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0514 00:18:03.646822    4316 command_runner.go:130] ! I0514 00:16:56.984377       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0514 00:18:03.646822    4316 command_runner.go:130] ! I0514 00:16:56.984435       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0514 00:18:03.646822    4316 command_runner.go:130] ! I0514 00:16:56.984460       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0514 00:18:03.646822    4316 command_runner.go:130] ! I0514 00:16:56.984478       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0514 00:18:03.646908    4316 command_runner.go:130] ! I0514 00:16:56.984528       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0514 00:18:03.646943    4316 command_runner.go:130] ! I0514 00:16:56.984568       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:56.984736       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:56.985288       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:56.995607       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:56.996188       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:56.997004       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:56.997141       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:56.997174       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:56.997363       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:56.997373       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:57.003479       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:57.004086       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:57.004336       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:16:57.004348       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:17:07.031733       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:17:07.032143       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:17:07.032242       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:17:07.032648       1 shared_informer.go:313] Waiting for caches to sync for node
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:17:07.034995       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:17:07.035109       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:17:07.035510       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:17:07.035544       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:17:07.035551       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:17:07.038183       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0514 00:18:03.646970    4316 command_runner.go:130] ! I0514 00:17:07.038394       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0514 00:18:03.647513    4316 command_runner.go:130] ! I0514 00:17:07.039212       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0514 00:18:03.647513    4316 command_runner.go:130] ! I0514 00:17:07.040784       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0514 00:18:03.647513    4316 command_runner.go:130] ! I0514 00:17:07.041050       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0514 00:18:03.647513    4316 command_runner.go:130] ! I0514 00:17:07.041194       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0514 00:18:03.647513    4316 command_runner.go:130] ! I0514 00:17:07.043909       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0514 00:18:03.647513    4316 command_runner.go:130] ! I0514 00:17:07.044044       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0514 00:18:03.647823    4316 command_runner.go:130] ! I0514 00:17:07.044106       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0514 00:18:03.647823    4316 command_runner.go:130] ! I0514 00:17:07.059101       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0514 00:18:03.647823    4316 command_runner.go:130] ! I0514 00:17:07.059352       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0514 00:18:03.647924    4316 command_runner.go:130] ! I0514 00:17:07.059503       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0514 00:18:03.647924    4316 command_runner.go:130] ! I0514 00:17:07.062189       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0514 00:18:03.647924    4316 command_runner.go:130] ! I0514 00:17:07.062615       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0514 00:18:03.647924    4316 command_runner.go:130] ! I0514 00:17:07.062641       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0514 00:18:03.647924    4316 command_runner.go:130] ! I0514 00:17:07.070971       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0514 00:18:03.647991    4316 command_runner.go:130] ! I0514 00:17:07.071021       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0514 00:18:03.647991    4316 command_runner.go:130] ! I0514 00:17:07.071151       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0514 00:18:03.648018    4316 command_runner.go:130] ! I0514 00:17:07.071293       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0514 00:18:03.648018    4316 command_runner.go:130] ! I0514 00:17:07.071328       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0514 00:18:03.648018    4316 command_runner.go:130] ! I0514 00:17:07.071388       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0514 00:18:03.648018    4316 command_runner.go:130] ! I0514 00:17:07.083342       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0514 00:18:03.648018    4316 command_runner.go:130] ! I0514 00:17:07.084321       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0514 00:18:03.648097    4316 command_runner.go:130] ! I0514 00:17:07.084474       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0514 00:18:03.648097    4316 command_runner.go:130] ! I0514 00:17:07.085952       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0514 00:18:03.648097    4316 command_runner.go:130] ! I0514 00:17:07.086347       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0514 00:18:03.648097    4316 command_runner.go:130] ! I0514 00:17:07.086569       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0514 00:18:03.648097    4316 command_runner.go:130] ! I0514 00:17:07.088414       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0514 00:18:03.648162    4316 command_runner.go:130] ! I0514 00:17:07.088731       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0514 00:18:03.648188    4316 command_runner.go:130] ! I0514 00:17:07.089444       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0514 00:18:03.648188    4316 command_runner.go:130] ! I0514 00:17:07.091486       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0514 00:18:03.648188    4316 command_runner.go:130] ! I0514 00:17:07.091650       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0514 00:18:03.648188    4316 command_runner.go:130] ! I0514 00:17:07.091678       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0514 00:18:03.648188    4316 command_runner.go:130] ! I0514 00:17:07.094570       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0514 00:18:03.648266    4316 command_runner.go:130] ! I0514 00:17:07.095467       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0514 00:18:03.648266    4316 command_runner.go:130] ! I0514 00:17:07.095818       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0514 00:18:03.648266    4316 command_runner.go:130] ! I0514 00:17:07.097778       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0514 00:18:03.648266    4316 command_runner.go:130] ! I0514 00:17:07.098911       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0514 00:18:03.648266    4316 command_runner.go:130] ! I0514 00:17:07.098939       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0514 00:18:03.648332    4316 command_runner.go:130] ! I0514 00:17:07.100648       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0514 00:18:03.648359    4316 command_runner.go:130] ! I0514 00:17:07.101514       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0514 00:18:03.648359    4316 command_runner.go:130] ! I0514 00:17:07.101659       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0514 00:18:03.648359    4316 command_runner.go:130] ! I0514 00:17:07.103436       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0514 00:18:03.648359    4316 command_runner.go:130] ! I0514 00:17:07.103908       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0514 00:18:03.648359    4316 command_runner.go:130] ! I0514 00:17:07.109194       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0514 00:18:03.648359    4316 command_runner.go:130] ! I0514 00:17:07.109267       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0514 00:18:03.648437    4316 command_runner.go:130] ! I0514 00:17:07.109496       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0514 00:18:03.648437    4316 command_runner.go:130] ! I0514 00:17:07.113760       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0514 00:18:03.648437    4316 command_runner.go:130] ! I0514 00:17:07.114024       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0514 00:18:03.648437    4316 command_runner.go:130] ! I0514 00:17:07.114252       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0514 00:18:03.648437    4316 command_runner.go:130] ! I0514 00:17:07.115259       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.116925       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.117254       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.117353       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.121368       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.121764       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.121788       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.122128       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.122156       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.122248       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.122301       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.122371       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.122432       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.122464       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.122706       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.123282       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.123678       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.126535       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.126692       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0514 00:18:03.648502    4316 command_runner.go:130] ! E0514 00:17:07.165594       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.165634       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.218097       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.218271       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.218379       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.218721       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.265917       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.266033       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.266045       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.315398       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.315511       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.315534       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.415899       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0514 00:18:03.648502    4316 command_runner.go:130] ! I0514 00:17:07.416022       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0514 00:18:03.649045    4316 command_runner.go:130] ! I0514 00:17:07.465981       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0514 00:18:03.649045    4316 command_runner.go:130] ! I0514 00:17:07.466026       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0514 00:18:03.649045    4316 command_runner.go:130] ! I0514 00:17:07.466177       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0514 00:18:03.649045    4316 command_runner.go:130] ! I0514 00:17:07.466545       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0514 00:18:03.649045    4316 command_runner.go:130] ! I0514 00:17:07.516337       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0514 00:18:03.649045    4316 command_runner.go:130] ! I0514 00:17:07.516498       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0514 00:18:03.649124    4316 command_runner.go:130] ! I0514 00:17:07.516515       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0514 00:18:03.649124    4316 command_runner.go:130] ! I0514 00:17:07.567477       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0514 00:18:03.649124    4316 command_runner.go:130] ! I0514 00:17:07.567616       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0514 00:18:03.649124    4316 command_runner.go:130] ! I0514 00:17:07.567627       1 shared_informer.go:313] Waiting for caches to sync for job
	I0514 00:18:03.649175    4316 command_runner.go:130] ! I0514 00:17:07.617346       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0514 00:18:03.649175    4316 command_runner.go:130] ! I0514 00:17:07.617464       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0514 00:18:03.649175    4316 command_runner.go:130] ! I0514 00:17:07.617476       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0514 00:18:03.649175    4316 command_runner.go:130] ! E0514 00:17:07.665765       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0514 00:18:03.649175    4316 command_runner.go:130] ! I0514 00:17:07.665865       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0514 00:18:03.649372    4316 command_runner.go:130] ! I0514 00:17:07.665876       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0514 00:18:03.649372    4316 command_runner.go:130] ! I0514 00:17:07.671623       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0514 00:18:03.649372    4316 command_runner.go:130] ! I0514 00:17:07.693623       1 shared_informer.go:320] Caches are synced for crt configmap
	I0514 00:18:03.649372    4316 command_runner.go:130] ! I0514 00:17:07.703208       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0514 00:18:03.649454    4316 command_runner.go:130] ! I0514 00:17:07.707002       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100\" does not exist"
	I0514 00:18:03.649454    4316 command_runner.go:130] ! I0514 00:17:07.707898       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m02\" does not exist"
	I0514 00:18:03.649454    4316 command_runner.go:130] ! I0514 00:17:07.708010       1 shared_informer.go:320] Caches are synced for daemon sets
	I0514 00:18:03.649454    4316 command_runner.go:130] ! I0514 00:17:07.708168       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m03\" does not exist"
	I0514 00:18:03.649513    4316 command_runner.go:130] ! I0514 00:17:07.710800       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0514 00:18:03.649513    4316 command_runner.go:130] ! I0514 00:17:07.710879       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0514 00:18:03.649513    4316 command_runner.go:130] ! I0514 00:17:07.716140       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0514 00:18:03.649549    4316 command_runner.go:130] ! I0514 00:17:07.716709       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0514 00:18:03.649549    4316 command_runner.go:130] ! I0514 00:17:07.717695       1 shared_informer.go:320] Caches are synced for cronjob
	I0514 00:18:03.649549    4316 command_runner.go:130] ! I0514 00:17:07.717710       1 shared_informer.go:320] Caches are synced for stateful set
	I0514 00:18:03.649549    4316 command_runner.go:130] ! I0514 00:17:07.718924       1 shared_informer.go:320] Caches are synced for attach detach
	I0514 00:18:03.649549    4316 command_runner.go:130] ! I0514 00:17:07.723267       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0514 00:18:03.649549    4316 command_runner.go:130] ! I0514 00:17:07.723378       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0514 00:18:03.649549    4316 command_runner.go:130] ! I0514 00:17:07.723467       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0514 00:18:03.649628    4316 command_runner.go:130] ! I0514 00:17:07.723495       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0514 00:18:03.649628    4316 command_runner.go:130] ! I0514 00:17:07.726980       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0514 00:18:03.649628    4316 command_runner.go:130] ! I0514 00:17:07.733271       1 shared_informer.go:320] Caches are synced for node
	I0514 00:18:03.649628    4316 command_runner.go:130] ! I0514 00:17:07.733445       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0514 00:18:03.649628    4316 command_runner.go:130] ! I0514 00:17:07.733467       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0514 00:18:03.649723    4316 command_runner.go:130] ! I0514 00:17:07.733473       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0514 00:18:03.649723    4316 command_runner.go:130] ! I0514 00:17:07.733480       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0514 00:18:03.649723    4316 command_runner.go:130] ! I0514 00:17:07.739996       1 shared_informer.go:320] Caches are synced for expand
	I0514 00:18:03.649723    4316 command_runner.go:130] ! I0514 00:17:07.742032       1 shared_informer.go:320] Caches are synced for PV protection
	I0514 00:18:03.649723    4316 command_runner.go:130] ! I0514 00:17:07.744959       1 shared_informer.go:320] Caches are synced for ephemeral
	I0514 00:18:03.649723    4316 command_runner.go:130] ! I0514 00:17:07.760453       1 shared_informer.go:320] Caches are synced for namespace
	I0514 00:18:03.649820    4316 command_runner.go:130] ! I0514 00:17:07.762790       1 shared_informer.go:320] Caches are synced for service account
	I0514 00:18:03.649820    4316 command_runner.go:130] ! I0514 00:17:07.766175       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0514 00:18:03.649820    4316 command_runner.go:130] ! I0514 00:17:07.767750       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0514 00:18:03.649820    4316 command_runner.go:130] ! I0514 00:17:07.768151       1 shared_informer.go:320] Caches are synced for job
	I0514 00:18:03.649820    4316 command_runner.go:130] ! I0514 00:17:07.779225       1 shared_informer.go:320] Caches are synced for TTL
	I0514 00:18:03.649820    4316 command_runner.go:130] ! I0514 00:17:07.779406       1 shared_informer.go:320] Caches are synced for GC
	I0514 00:18:03.649820    4316 command_runner.go:130] ! I0514 00:17:07.784902       1 shared_informer.go:320] Caches are synced for HPA
	I0514 00:18:03.649820    4316 command_runner.go:130] ! I0514 00:17:07.787441       1 shared_informer.go:320] Caches are synced for persistent volume
	I0514 00:18:03.649820    4316 command_runner.go:130] ! I0514 00:17:07.790178       1 shared_informer.go:320] Caches are synced for PVC protection
	I0514 00:18:03.649908    4316 command_runner.go:130] ! I0514 00:17:07.791571       1 shared_informer.go:320] Caches are synced for endpoint
	I0514 00:18:03.649908    4316 command_runner.go:130] ! I0514 00:17:07.797318       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0514 00:18:03.649908    4316 command_runner.go:130] ! I0514 00:17:07.816750       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0514 00:18:03.649908    4316 command_runner.go:130] ! I0514 00:17:07.836762       1 shared_informer.go:320] Caches are synced for taint
	I0514 00:18:03.649908    4316 command_runner.go:130] ! I0514 00:17:07.837127       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0514 00:18:03.649908    4316 command_runner.go:130] ! I0514 00:17:07.869081       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m03"
	I0514 00:18:03.649969    4316 command_runner.go:130] ! I0514 00:17:07.869544       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m02"
	I0514 00:18:03.649969    4316 command_runner.go:130] ! I0514 00:17:07.869413       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100"
	I0514 00:18:03.650006    4316 command_runner.go:130] ! I0514 00:17:07.870789       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0514 00:18:03.650006    4316 command_runner.go:130] ! I0514 00:17:07.898670       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 00:18:03.650006    4316 command_runner.go:130] ! I0514 00:17:07.901033       1 shared_informer.go:320] Caches are synced for deployment
	I0514 00:18:03.650006    4316 command_runner.go:130] ! I0514 00:17:07.904366       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0514 00:18:03.650006    4316 command_runner.go:130] ! I0514 00:17:07.916125       1 shared_informer.go:320] Caches are synced for disruption
	I0514 00:18:03.650006    4316 command_runner.go:130] ! I0514 00:17:07.977330       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:17:07.988956       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:17:08.134754       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="230.307102ms"
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:17:08.134896       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.6µs"
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:17:08.140785       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="234.508146ms"
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:17:08.140977       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.3µs"
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:17:08.412419       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:17:08.472034       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:17:08.472384       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:17:37.878702       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:18:01.608725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.75856ms"
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:18:01.608844       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.702µs"
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:18:01.651304       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.008µs"
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:18:01.710123       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.783088ms"
	I0514 00:18:03.650073    4316 command_runner.go:130] ! I0514 00:18:01.711762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.302µs"
	I0514 00:18:03.663561    4316 logs.go:123] Gathering logs for kube-controller-manager [e96f94398d6d] ...
	I0514 00:18:03.663561    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e96f94398d6d"
	I0514 00:18:03.699380    4316 command_runner.go:130] ! I0513 23:56:04.448604       1 serving.go:380] Generated self-signed cert in-memory
	I0514 00:18:03.700268    4316 command_runner.go:130] ! I0513 23:56:04.932336       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0514 00:18:03.700268    4316 command_runner.go:130] ! I0513 23:56:04.932378       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:03.700268    4316 command_runner.go:130] ! I0513 23:56:04.934044       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0514 00:18:03.700268    4316 command_runner.go:130] ! I0513 23:56:04.934133       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:03.700268    4316 command_runner.go:130] ! I0513 23:56:04.934796       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0514 00:18:03.700268    4316 command_runner.go:130] ! I0513 23:56:04.935005       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:03.700268    4316 command_runner.go:130] ! I0513 23:56:09.124957       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0514 00:18:03.700550    4316 command_runner.go:130] ! I0513 23:56:09.125092       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0514 00:18:03.700550    4316 command_runner.go:130] ! I0513 23:56:09.140996       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0514 00:18:03.700617    4316 command_runner.go:130] ! I0513 23:56:09.141447       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0514 00:18:03.700617    4316 command_runner.go:130] ! I0513 23:56:09.141567       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0514 00:18:03.700617    4316 command_runner.go:130] ! I0513 23:56:09.156847       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0514 00:18:03.700676    4316 command_runner.go:130] ! I0513 23:56:09.157241       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0514 00:18:03.700676    4316 command_runner.go:130] ! I0513 23:56:09.157455       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0514 00:18:03.700732    4316 command_runner.go:130] ! I0513 23:56:09.170795       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0514 00:18:03.700773    4316 command_runner.go:130] ! I0513 23:56:09.171005       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0514 00:18:03.700773    4316 command_runner.go:130] ! I0513 23:56:09.171684       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0514 00:18:03.700830    4316 command_runner.go:130] ! I0513 23:56:09.171921       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0514 00:18:03.700830    4316 command_runner.go:130] ! I0513 23:56:09.172144       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0514 00:18:03.700927    4316 command_runner.go:130] ! I0513 23:56:09.183975       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0514 00:18:03.700977    4316 command_runner.go:130] ! I0513 23:56:09.184362       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0514 00:18:03.700977    4316 command_runner.go:130] ! I0513 23:56:09.185233       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0514 00:18:03.701022    4316 command_runner.go:130] ! I0513 23:56:09.230173       1 shared_informer.go:320] Caches are synced for tokens
	I0514 00:18:03.701022    4316 command_runner.go:130] ! I0513 23:56:09.242679       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0514 00:18:03.701022    4316 command_runner.go:130] ! I0513 23:56:09.242735       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0514 00:18:03.701093    4316 command_runner.go:130] ! I0513 23:56:09.242821       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0514 00:18:03.701093    4316 command_runner.go:130] ! I0513 23:56:09.249513       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0514 00:18:03.701143    4316 command_runner.go:130] ! I0513 23:56:09.249614       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0514 00:18:03.701143    4316 command_runner.go:130] ! I0513 23:56:09.249731       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0514 00:18:03.701207    4316 command_runner.go:130] ! I0513 23:56:09.249824       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0514 00:18:03.701207    4316 command_runner.go:130] ! I0513 23:56:09.249912       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.250132       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.250216       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.250270       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.250425       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.250604       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.250656       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.250695       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.250745       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.250794       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.250851       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.250883       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.250994       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.251028       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.251909       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.251999       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.252142       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.305089       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.305302       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.305357       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.305376       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.321907       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.322244       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.322270       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.324160       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.324208       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! E0513 23:56:09.334850       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.335135       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.346530       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.346809       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.346883       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.385297       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0514 00:18:03.701269    4316 command_runner.go:130] ! I0513 23:56:09.385391       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0514 00:18:03.701808    4316 command_runner.go:130] ! I0513 23:56:09.385403       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0514 00:18:03.701867    4316 command_runner.go:130] ! I0513 23:56:09.542113       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0514 00:18:03.701867    4316 command_runner.go:130] ! I0513 23:56:09.542271       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0514 00:18:03.701930    4316 command_runner.go:130] ! I0513 23:56:09.542284       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0514 00:18:03.701930    4316 command_runner.go:130] ! I0513 23:56:09.581300       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0514 00:18:03.701989    4316 command_runner.go:130] ! I0513 23:56:09.581321       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0514 00:18:03.701989    4316 command_runner.go:130] ! I0513 23:56:09.581454       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:03.702050    4316 command_runner.go:130] ! I0513 23:56:09.581971       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0514 00:18:03.702125    4316 command_runner.go:130] ! I0513 23:56:09.582008       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0514 00:18:03.702125    4316 command_runner.go:130] ! I0513 23:56:09.582030       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:03.702182    4316 command_runner.go:130] ! I0513 23:56:09.582896       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0514 00:18:03.702182    4316 command_runner.go:130] ! I0513 23:56:09.582908       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0514 00:18:03.702253    4316 command_runner.go:130] ! I0513 23:56:09.582922       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:03.702312    4316 command_runner.go:130] ! I0513 23:56:09.583436       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0514 00:18:03.702312    4316 command_runner.go:130] ! I0513 23:56:09.583678       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0514 00:18:03.702374    4316 command_runner.go:130] ! I0513 23:56:09.583691       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0514 00:18:03.702374    4316 command_runner.go:130] ! I0513 23:56:09.583727       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:03.702450    4316 command_runner.go:130] ! I0513 23:56:09.734073       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0514 00:18:03.702450    4316 command_runner.go:130] ! I0513 23:56:09.734159       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0514 00:18:03.702516    4316 command_runner.go:130] ! I0513 23:56:09.734446       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0514 00:18:03.702516    4316 command_runner.go:130] ! I0513 23:56:09.885354       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0514 00:18:03.702574    4316 command_runner.go:130] ! I0513 23:56:09.885756       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0514 00:18:03.702574    4316 command_runner.go:130] ! I0513 23:56:09.885934       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0514 00:18:03.702631    4316 command_runner.go:130] ! I0513 23:56:10.040288       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0514 00:18:03.702631    4316 command_runner.go:130] ! I0513 23:56:10.040486       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0514 00:18:03.702681    4316 command_runner.go:130] ! I0513 23:56:20.090311       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0514 00:18:03.702737    4316 command_runner.go:130] ! I0513 23:56:20.090418       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0514 00:18:03.702737    4316 command_runner.go:130] ! I0513 23:56:20.090428       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0514 00:18:03.702800    4316 command_runner.go:130] ! I0513 23:56:20.090911       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0514 00:18:03.702800    4316 command_runner.go:130] ! I0513 23:56:20.091093       1 shared_informer.go:313] Waiting for caches to sync for node
	I0514 00:18:03.702859    4316 command_runner.go:130] ! I0513 23:56:20.101598       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0514 00:18:03.702859    4316 command_runner.go:130] ! I0513 23:56:20.101778       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0514 00:18:03.702909    4316 command_runner.go:130] ! I0513 23:56:20.101805       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0514 00:18:03.702909    4316 command_runner.go:130] ! I0513 23:56:20.114509       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0514 00:18:03.702964    4316 command_runner.go:130] ! I0513 23:56:20.114580       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0514 00:18:03.703013    4316 command_runner.go:130] ! I0513 23:56:20.114849       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0514 00:18:03.703013    4316 command_runner.go:130] ! I0513 23:56:20.114678       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0514 00:18:03.703068    4316 command_runner.go:130] ! I0513 23:56:20.115038       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0514 00:18:03.703068    4316 command_runner.go:130] ! I0513 23:56:20.115048       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0514 00:18:03.703116    4316 command_runner.go:130] ! E0513 23:56:20.117646       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0514 00:18:03.703183    4316 command_runner.go:130] ! I0513 23:56:20.117865       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0514 00:18:03.703183    4316 command_runner.go:130] ! I0513 23:56:20.130498       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0514 00:18:03.703232    4316 command_runner.go:130] ! I0513 23:56:20.130711       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0514 00:18:03.703232    4316 command_runner.go:130] ! I0513 23:56:20.130932       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0514 00:18:03.703281    4316 command_runner.go:130] ! I0513 23:56:20.143035       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0514 00:18:03.703321    4316 command_runner.go:130] ! I0513 23:56:20.143414       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0514 00:18:03.703371    4316 command_runner.go:130] ! I0513 23:56:20.143607       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0514 00:18:03.703371    4316 command_runner.go:130] ! I0513 23:56:20.160023       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0514 00:18:03.703454    4316 command_runner.go:130] ! I0513 23:56:20.160191       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0514 00:18:03.703475    4316 command_runner.go:130] ! I0513 23:56:20.160215       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0514 00:18:03.703514    4316 command_runner.go:130] ! I0513 23:56:20.170613       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0514 00:18:03.703569    4316 command_runner.go:130] ! I0513 23:56:20.170951       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0514 00:18:03.703609    4316 command_runner.go:130] ! I0513 23:56:20.171064       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0514 00:18:03.703660    4316 command_runner.go:130] ! I0513 23:56:20.179840       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0514 00:18:03.703706    4316 command_runner.go:130] ! I0513 23:56:20.180447       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0514 00:18:03.703706    4316 command_runner.go:130] ! I0513 23:56:20.180590       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0514 00:18:03.703741    4316 command_runner.go:130] ! I0513 23:56:20.190977       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0514 00:18:03.703781    4316 command_runner.go:130] ! I0513 23:56:20.191286       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0514 00:18:03.703781    4316 command_runner.go:130] ! I0513 23:56:20.191448       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0514 00:18:03.703828    4316 command_runner.go:130] ! I0513 23:56:20.204888       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0514 00:18:03.703913    4316 command_runner.go:130] ! I0513 23:56:20.205578       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0514 00:18:03.703963    4316 command_runner.go:130] ! I0513 23:56:20.205670       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0514 00:18:03.703963    4316 command_runner.go:130] ! I0513 23:56:20.239034       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0514 00:18:03.704004    4316 command_runner.go:130] ! I0513 23:56:20.239193       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0514 00:18:03.704004    4316 command_runner.go:130] ! I0513 23:56:20.239262       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0514 00:18:03.704084    4316 command_runner.go:130] ! I0513 23:56:20.482568       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0514 00:18:03.704084    4316 command_runner.go:130] ! I0513 23:56:20.486046       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0514 00:18:03.704137    4316 command_runner.go:130] ! I0513 23:56:20.486073       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0514 00:18:03.704177    4316 command_runner.go:130] ! I0513 23:56:20.486093       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0514 00:18:03.704177    4316 command_runner.go:130] ! I0513 23:56:20.786163       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0514 00:18:03.704255    4316 command_runner.go:130] ! I0513 23:56:20.786358       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0514 00:18:03.704255    4316 command_runner.go:130] ! I0513 23:56:21.082938       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0514 00:18:03.704304    4316 command_runner.go:130] ! I0513 23:56:21.083657       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0514 00:18:03.704346    4316 command_runner.go:130] ! I0513 23:56:21.083743       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0514 00:18:03.704391    4316 command_runner.go:130] ! I0513 23:56:21.238006       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0514 00:18:03.704425    4316 command_runner.go:130] ! I0513 23:56:21.238099       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0514 00:18:03.704516    4316 command_runner.go:130] ! I0513 23:56:21.238152       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0514 00:18:03.704562    4316 command_runner.go:130] ! I0513 23:56:21.238163       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0514 00:18:03.704562    4316 command_runner.go:130] ! I0513 23:56:21.283674       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0514 00:18:03.704596    4316 command_runner.go:130] ! I0513 23:56:21.283751       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0514 00:18:03.704596    4316 command_runner.go:130] ! I0513 23:56:21.283986       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0514 00:18:03.704644    4316 command_runner.go:130] ! I0513 23:56:21.284217       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0514 00:18:03.704686    4316 command_runner.go:130] ! I0513 23:56:21.442664       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0514 00:18:03.704686    4316 command_runner.go:130] ! I0513 23:56:21.442840       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0514 00:18:03.704733    4316 command_runner.go:130] ! I0513 23:56:21.442854       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0514 00:18:03.704733    4316 command_runner.go:130] ! I0513 23:56:21.587997       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0514 00:18:03.704766    4316 command_runner.go:130] ! I0513 23:56:21.588249       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0514 00:18:03.704815    4316 command_runner.go:130] ! I0513 23:56:21.588322       1 shared_informer.go:313] Waiting for caches to sync for job
	I0514 00:18:03.704856    4316 command_runner.go:130] ! I0513 23:56:21.740205       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0514 00:18:03.704856    4316 command_runner.go:130] ! I0513 23:56:21.740392       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0514 00:18:03.704901    4316 command_runner.go:130] ! I0513 23:56:21.740547       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0514 00:18:03.704933    4316 command_runner.go:130] ! I0513 23:56:21.889738       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0514 00:18:03.704933    4316 command_runner.go:130] ! I0513 23:56:21.890053       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0514 00:18:03.704981    4316 command_runner.go:130] ! I0513 23:56:21.890145       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0514 00:18:03.705024    4316 command_runner.go:130] ! I0513 23:56:22.038114       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0514 00:18:03.705024    4316 command_runner.go:130] ! I0513 23:56:22.038197       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0514 00:18:03.705024    4316 command_runner.go:130] ! I0513 23:56:22.038216       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0514 00:18:03.705079    4316 command_runner.go:130] ! I0513 23:56:22.038314       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0514 00:18:03.705129    4316 command_runner.go:130] ! I0513 23:56:22.038329       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0514 00:18:03.705129    4316 command_runner.go:130] ! I0513 23:56:22.291303       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0514 00:18:03.705185    4316 command_runner.go:130] ! I0513 23:56:22.291332       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0514 00:18:03.705185    4316 command_runner.go:130] ! I0513 23:56:22.291999       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0514 00:18:03.705234    4316 command_runner.go:130] ! I0513 23:56:22.299124       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0514 00:18:03.705234    4316 command_runner.go:130] ! I0513 23:56:22.317101       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0514 00:18:03.705289    4316 command_runner.go:130] ! I0513 23:56:22.321553       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100\" does not exist"
	I0514 00:18:03.705338    4316 command_runner.go:130] ! I0513 23:56:22.322540       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0514 00:18:03.705338    4316 command_runner.go:130] ! I0513 23:56:22.335837       1 shared_informer.go:320] Caches are synced for cronjob
	I0514 00:18:03.705393    4316 command_runner.go:130] ! I0513 23:56:22.339493       1 shared_informer.go:320] Caches are synced for PV protection
	I0514 00:18:03.705393    4316 command_runner.go:130] ! I0513 23:56:22.339494       1 shared_informer.go:320] Caches are synced for GC
	I0514 00:18:03.705444    4316 command_runner.go:130] ! I0513 23:56:22.339605       1 shared_informer.go:320] Caches are synced for crt configmap
	I0514 00:18:03.705444    4316 command_runner.go:130] ! I0513 23:56:22.340940       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0514 00:18:03.705499    4316 command_runner.go:130] ! I0513 23:56:22.341044       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0514 00:18:03.705499    4316 command_runner.go:130] ! I0513 23:56:22.342309       1 shared_informer.go:320] Caches are synced for service account
	I0514 00:18:03.705549    4316 command_runner.go:130] ! I0513 23:56:22.343675       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0514 00:18:03.705605    4316 command_runner.go:130] ! I0513 23:56:22.343828       1 shared_informer.go:320] Caches are synced for PVC protection
	I0514 00:18:03.705655    4316 command_runner.go:130] ! I0513 23:56:22.347539       1 shared_informer.go:320] Caches are synced for expand
	I0514 00:18:03.705655    4316 command_runner.go:130] ! I0513 23:56:22.357773       1 shared_informer.go:320] Caches are synced for deployment
	I0514 00:18:03.705655    4316 command_runner.go:130] ! I0513 23:56:22.361377       1 shared_informer.go:320] Caches are synced for ephemeral
	I0514 00:18:03.705711    4316 command_runner.go:130] ! I0513 23:56:22.372019       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0514 00:18:03.705762    4316 command_runner.go:130] ! I0513 23:56:22.380620       1 shared_informer.go:320] Caches are synced for stateful set
	I0514 00:18:03.705762    4316 command_runner.go:130] ! I0513 23:56:22.382092       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0514 00:18:03.705817    4316 command_runner.go:130] ! I0513 23:56:22.382250       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0514 00:18:03.705817    4316 command_runner.go:130] ! I0513 23:56:22.382979       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0514 00:18:03.705865    4316 command_runner.go:130] ! I0513 23:56:22.384565       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0514 00:18:03.705865    4316 command_runner.go:130] ! I0513 23:56:22.384604       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0514 00:18:03.705920    4316 command_runner.go:130] ! I0513 23:56:22.384724       1 shared_informer.go:320] Caches are synced for HPA
	I0514 00:18:03.705920    4316 command_runner.go:130] ! I0513 23:56:22.386009       1 shared_informer.go:320] Caches are synced for TTL
	I0514 00:18:03.705969    4316 command_runner.go:130] ! I0513 23:56:22.386117       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0514 00:18:03.706027    4316 command_runner.go:130] ! I0513 23:56:22.386299       1 shared_informer.go:320] Caches are synced for attach detach
	I0514 00:18:03.706027    4316 command_runner.go:130] ! I0513 23:56:22.389103       1 shared_informer.go:320] Caches are synced for job
	I0514 00:18:03.706027    4316 command_runner.go:130] ! I0513 23:56:22.390596       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0514 00:18:03.706076    4316 command_runner.go:130] ! I0513 23:56:22.391278       1 shared_informer.go:320] Caches are synced for node
	I0514 00:18:03.706131    4316 command_runner.go:130] ! I0513 23:56:22.391538       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0514 00:18:03.706131    4316 command_runner.go:130] ! I0513 23:56:22.391663       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0514 00:18:03.706180    4316 command_runner.go:130] ! I0513 23:56:22.392031       1 shared_informer.go:320] Caches are synced for namespace
	I0514 00:18:03.706237    4316 command_runner.go:130] ! I0513 23:56:22.392207       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0514 00:18:03.706237    4316 command_runner.go:130] ! I0513 23:56:22.392242       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0514 00:18:03.706237    4316 command_runner.go:130] ! I0513 23:56:22.392249       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0514 00:18:03.706299    4316 command_runner.go:130] ! I0513 23:56:22.402105       1 shared_informer.go:320] Caches are synced for daemon sets
	I0514 00:18:03.706299    4316 command_runner.go:130] ! I0513 23:56:22.405500       1 shared_informer.go:320] Caches are synced for disruption
	I0514 00:18:03.706356    4316 command_runner.go:130] ! I0513 23:56:22.406927       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0514 00:18:03.706356    4316 command_runner.go:130] ! I0513 23:56:22.411111       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100" podCIDRs=["10.244.0.0/24"]
	I0514 00:18:03.706356    4316 command_runner.go:130] ! I0513 23:56:22.431075       1 shared_informer.go:320] Caches are synced for persistent volume
	I0514 00:18:03.706455    4316 command_runner.go:130] ! I0513 23:56:22.443663       1 shared_informer.go:320] Caches are synced for endpoint
	I0514 00:18:03.706455    4316 command_runner.go:130] ! I0513 23:56:22.552382       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 00:18:03.706455    4316 command_runner.go:130] ! I0513 23:56:22.573274       1 shared_informer.go:320] Caches are synced for taint
	I0514 00:18:03.706530    4316 command_runner.go:130] ! I0513 23:56:22.573442       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0514 00:18:03.706563    4316 command_runner.go:130] ! I0513 23:56:22.573935       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100"
	I0514 00:18:03.706606    4316 command_runner.go:130] ! I0513 23:56:22.574179       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0514 00:18:03.706645    4316 command_runner.go:130] ! I0513 23:56:22.586849       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0514 00:18:03.706698    4316 command_runner.go:130] ! I0513 23:56:22.602574       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 00:18:03.706742    4316 command_runner.go:130] ! I0513 23:56:23.018846       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 00:18:03.706793    4316 command_runner.go:130] ! I0513 23:56:23.087540       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 00:18:03.706831    4316 command_runner.go:130] ! I0513 23:56:23.087631       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0514 00:18:03.706831    4316 command_runner.go:130] ! I0513 23:56:23.691681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="593.37356ms"
	I0514 00:18:03.706887    4316 command_runner.go:130] ! I0513 23:56:23.736584       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.765409ms"
	I0514 00:18:03.706931    4316 command_runner.go:130] ! I0513 23:56:23.736691       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.105µs"
	I0514 00:18:03.706993    4316 command_runner.go:130] ! I0513 23:56:23.741069       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="82.307µs"
	I0514 00:18:03.706993    4316 command_runner.go:130] ! I0513 23:56:24.558346       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.410112ms"
	I0514 00:18:03.707059    4316 command_runner.go:130] ! I0513 23:56:24.599621       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.388659ms"
	I0514 00:18:03.707109    4316 command_runner.go:130] ! I0513 23:56:24.599778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.705µs"
	I0514 00:18:03.707160    4316 command_runner.go:130] ! I0513 23:56:35.460855       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.604µs"
	I0514 00:18:03.707188    4316 command_runner.go:130] ! I0513 23:56:35.495875       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="63.404µs"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:56:36.868700       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="85.505µs"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:56:36.916603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.935352ms"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:56:36.917123       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.803µs"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:56:37.577172       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:02.230067       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m02\" does not exist"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:02.246355       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m02" podCIDRs=["10.244.1.0/24"]
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:02.603699       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m02"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:22.527169       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:45.791856       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.887671ms"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:45.808219       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.096894ms"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:45.808747       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.005µs"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:45.809833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.705µs"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:45.811263       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.604µs"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:48.526617       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.926472ms"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:48.529326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.302µs"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:48.555529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.972453ms"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0513 23:59:48.556317       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.601µs"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0514 00:03:17.563212       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0514 00:03:17.565297       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m03\" does not exist"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0514 00:03:17.579900       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m03" podCIDRs=["10.244.2.0/24"]
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0514 00:03:17.665892       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m03"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0514 00:03:38.035898       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0514 00:10:17.797191       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0514 00:12:39.070271       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:03.707221    4316 command_runner.go:130] ! I0514 00:12:44.527915       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:03.707760    4316 command_runner.go:130] ! I0514 00:12:44.528275       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m03\" does not exist"
	I0514 00:18:03.707816    4316 command_runner.go:130] ! I0514 00:12:44.543895       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m03" podCIDRs=["10.244.3.0/24"]
	I0514 00:18:03.707876    4316 command_runner.go:130] ! I0514 00:12:49.983419       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:03.707876    4316 command_runner.go:130] ! I0514 00:14:17.920991       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:03.707922    4316 command_runner.go:130] ! I0514 00:14:33.013074       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.740609ms"
	I0514 00:18:03.707922    4316 command_runner.go:130] ! I0514 00:14:33.013918       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.506µs"
	I0514 00:18:03.722425    4316 logs.go:123] Gathering logs for kindnet [b7d8d9a5e5ea] ...
	I0514 00:18:03.722425    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d8d9a5e5ea"
	I0514 00:18:03.745839    4316 command_runner.go:130] ! I0514 00:16:57.751233       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0514 00:18:03.746581    4316 command_runner.go:130] ! I0514 00:16:57.751585       1 main.go:107] hostIP = 172.23.102.122
	I0514 00:18:03.746581    4316 command_runner.go:130] ! podIP = 172.23.102.122
	I0514 00:18:03.746581    4316 command_runner.go:130] ! I0514 00:16:57.752181       1 main.go:116] setting mtu 1500 for CNI 
	I0514 00:18:03.746581    4316 command_runner.go:130] ! I0514 00:16:57.752200       1 main.go:146] kindnetd IP family: "ipv4"
	I0514 00:18:03.746581    4316 command_runner.go:130] ! I0514 00:16:57.752221       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0514 00:18:03.746581    4316 command_runner.go:130] ! I0514 00:17:01.123977       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:03.746657    4316 command_runner.go:130] ! I0514 00:17:04.195495       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:03.746657    4316 command_runner.go:130] ! I0514 00:17:07.267636       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:03.746657    4316 command_runner.go:130] ! I0514 00:17:10.339619       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:03.746657    4316 command_runner.go:130] ! I0514 00:17:13.411801       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:03.746657    4316 command_runner.go:130] ! panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:03.746657    4316 command_runner.go:130] ! goroutine 1 [running]:
	I0514 00:18:03.746657    4316 command_runner.go:130] ! main.main()
	I0514 00:18:03.746657    4316 command_runner.go:130] ! 	/go/src/cmd/kindnetd/main.go:195 +0xd3d
	I0514 00:18:03.748337    4316 logs.go:123] Gathering logs for kube-apiserver [da9e6534cd87] ...
	I0514 00:18:03.748414    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e6534cd87"
	I0514 00:18:03.769734    4316 command_runner.go:130] ! I0514 00:16:52.020111       1 options.go:221] external host was not specified, using 172.23.102.122
	I0514 00:18:03.769734    4316 command_runner.go:130] ! I0514 00:16:52.031119       1 server.go:148] Version: v1.30.0
	I0514 00:18:03.769734    4316 command_runner.go:130] ! I0514 00:16:52.031201       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:03.769734    4316 command_runner.go:130] ! I0514 00:16:52.560170       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0514 00:18:03.769734    4316 command_runner.go:130] ! I0514 00:16:52.562027       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0514 00:18:03.770816    4316 command_runner.go:130] ! I0514 00:16:52.567323       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0514 00:18:03.770816    4316 command_runner.go:130] ! I0514 00:16:52.562214       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0514 00:18:03.770816    4316 command_runner.go:130] ! I0514 00:16:52.570134       1 instance.go:299] Using reconciler: lease
	I0514 00:18:03.770816    4316 command_runner.go:130] ! I0514 00:16:53.544464       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0514 00:18:03.770816    4316 command_runner.go:130] ! W0514 00:16:53.544866       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.770912    4316 command_runner.go:130] ! I0514 00:16:53.780904       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0514 00:18:03.770912    4316 command_runner.go:130] ! I0514 00:16:53.781233       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0514 00:18:03.770912    4316 command_runner.go:130] ! I0514 00:16:54.015006       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0514 00:18:03.770912    4316 command_runner.go:130] ! I0514 00:16:54.172205       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! I0514 00:16:54.186014       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0514 00:18:03.771135    4316 command_runner.go:130] ! W0514 00:16:54.186188       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! W0514 00:16:54.186609       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! I0514 00:16:54.187573       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0514 00:18:03.771135    4316 command_runner.go:130] ! W0514 00:16:54.187695       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! I0514 00:16:54.188811       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0514 00:18:03.771135    4316 command_runner.go:130] ! I0514 00:16:54.190200       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0514 00:18:03.771135    4316 command_runner.go:130] ! W0514 00:16:54.190309       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! W0514 00:16:54.190366       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! I0514 00:16:54.192283       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0514 00:18:03.771135    4316 command_runner.go:130] ! W0514 00:16:54.192583       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! I0514 00:16:54.193726       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0514 00:18:03.771135    4316 command_runner.go:130] ! W0514 00:16:54.193833       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! W0514 00:16:54.193842       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! I0514 00:16:54.194656       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0514 00:18:03.771135    4316 command_runner.go:130] ! W0514 00:16:54.194769       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! W0514 00:16:54.194831       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! I0514 00:16:54.195773       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0514 00:18:03.771135    4316 command_runner.go:130] ! I0514 00:16:54.200522       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0514 00:18:03.771135    4316 command_runner.go:130] ! W0514 00:16:54.200808       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! W0514 00:16:54.201073       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:03.771135    4316 command_runner.go:130] ! I0514 00:16:54.202173       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0514 00:18:03.771668    4316 command_runner.go:130] ! W0514 00:16:54.202352       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.771668    4316 command_runner.go:130] ! W0514 00:16:54.202465       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:03.771668    4316 command_runner.go:130] ! I0514 00:16:54.204036       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0514 00:18:03.771668    4316 command_runner.go:130] ! W0514 00:16:54.204232       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0514 00:18:03.771668    4316 command_runner.go:130] ! I0514 00:16:54.213708       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0514 00:18:03.771668    4316 command_runner.go:130] ! W0514 00:16:54.213869       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.771759    4316 command_runner.go:130] ! W0514 00:16:54.213992       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:03.771759    4316 command_runner.go:130] ! I0514 00:16:54.214976       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0514 00:18:03.771759    4316 command_runner.go:130] ! W0514 00:16:54.215217       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.771808    4316 command_runner.go:130] ! W0514 00:16:54.215317       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:03.771808    4316 command_runner.go:130] ! I0514 00:16:54.226860       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0514 00:18:03.771808    4316 command_runner.go:130] ! W0514 00:16:54.227134       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.771860    4316 command_runner.go:130] ! W0514 00:16:54.227258       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:03.771860    4316 command_runner.go:130] ! I0514 00:16:54.230259       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0514 00:18:03.771907    4316 command_runner.go:130] ! I0514 00:16:54.232567       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0514 00:18:03.771907    4316 command_runner.go:130] ! W0514 00:16:54.232734       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0514 00:18:03.771949    4316 command_runner.go:130] ! W0514 00:16:54.232824       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.771949    4316 command_runner.go:130] ! I0514 00:16:54.239186       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0514 00:18:03.771949    4316 command_runner.go:130] ! W0514 00:16:54.239294       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0514 00:18:03.771993    4316 command_runner.go:130] ! W0514 00:16:54.239304       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0514 00:18:03.771993    4316 command_runner.go:130] ! I0514 00:16:54.241605       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0514 00:18:03.771993    4316 command_runner.go:130] ! W0514 00:16:54.241703       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.772208    4316 command_runner.go:130] ! W0514 00:16:54.241712       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:03.772208    4316 command_runner.go:130] ! I0514 00:16:54.242373       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0514 00:18:03.772208    4316 command_runner.go:130] ! W0514 00:16:54.242466       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.772208    4316 command_runner.go:130] ! I0514 00:16:54.259244       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0514 00:18:03.772208    4316 command_runner.go:130] ! W0514 00:16:54.259536       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:03.772326    4316 command_runner.go:130] ! I0514 00:16:54.792225       1 secure_serving.go:213] Serving securely on [::]:8443
	I0514 00:18:03.772326    4316 command_runner.go:130] ! I0514 00:16:54.792432       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0514 00:18:03.772326    4316 command_runner.go:130] ! I0514 00:16:54.794552       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0514 00:18:03.772392    4316 command_runner.go:130] ! I0514 00:16:54.794677       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.794720       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.795157       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.795787       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.795995       1 controller.go:116] Starting legacy_token_tracking_controller
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.796042       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.796156       1 controller.go:78] Starting OpenAPI AggregationController
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.796272       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.797969       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.798688       1 available_controller.go:423] Starting AvailableConditionController
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.798701       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.799424       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.799667       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.799692       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.800971       1 aggregator.go:163] waiting for initial CRD sync...
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.792447       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.792459       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.792473       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.812587       1 controller.go:139] Starting OpenAPI controller
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.812611       1 controller.go:87] Starting OpenAPI V3 controller
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.812626       1 naming_controller.go:291] Starting NamingConditionController
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.812640       1 establishing_controller.go:76] Starting EstablishingController
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.812660       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.812674       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.812685       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.848957       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.849152       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.850275       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.850299       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.906495       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.938841       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.950730       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0514 00:18:03.772420    4316 command_runner.go:130] ! I0514 00:16:54.950897       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0514 00:18:03.772983    4316 command_runner.go:130] ! I0514 00:16:54.951294       1 aggregator.go:165] initial CRD sync complete...
	I0514 00:18:03.772983    4316 command_runner.go:130] ! I0514 00:16:54.951545       1 autoregister_controller.go:141] Starting autoregister controller
	I0514 00:18:03.772983    4316 command_runner.go:130] ! I0514 00:16:54.951793       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0514 00:18:03.772983    4316 command_runner.go:130] ! I0514 00:16:54.951875       1 cache.go:39] Caches are synced for autoregister controller
	I0514 00:18:03.772983    4316 command_runner.go:130] ! I0514 00:16:54.962299       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0514 00:18:03.773056    4316 command_runner.go:130] ! I0514 00:16:54.968027       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0514 00:18:03.773056    4316 command_runner.go:130] ! I0514 00:16:54.968302       1 policy_source.go:224] refreshing policies
	I0514 00:18:03.773056    4316 command_runner.go:130] ! I0514 00:16:54.997391       1 shared_informer.go:320] Caches are synced for configmaps
	I0514 00:18:03.773115    4316 command_runner.go:130] ! I0514 00:16:54.999391       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0514 00:18:03.773115    4316 command_runner.go:130] ! I0514 00:16:54.999732       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0514 00:18:03.773115    4316 command_runner.go:130] ! I0514 00:16:54.999871       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0514 00:18:03.773167    4316 command_runner.go:130] ! I0514 00:16:55.037244       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0514 00:18:03.773167    4316 command_runner.go:130] ! I0514 00:16:55.824524       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0514 00:18:03.773167    4316 command_runner.go:130] ! W0514 00:16:56.521956       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.102.122 172.23.106.39]
	I0514 00:18:03.773214    4316 command_runner.go:130] ! I0514 00:16:56.523614       1 controller.go:615] quota admission added evaluator for: endpoints
	I0514 00:18:03.773214    4316 command_runner.go:130] ! I0514 00:16:56.536716       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0514 00:18:03.773257    4316 command_runner.go:130] ! I0514 00:16:57.861026       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0514 00:18:03.773257    4316 command_runner.go:130] ! I0514 00:16:58.068043       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0514 00:18:03.773257    4316 command_runner.go:130] ! I0514 00:16:58.085925       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0514 00:18:03.773303    4316 command_runner.go:130] ! I0514 00:16:58.189328       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0514 00:18:03.773303    4316 command_runner.go:130] ! I0514 00:16:58.200849       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0514 00:18:03.773303    4316 command_runner.go:130] ! W0514 00:17:16.528300       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.102.122]
	I0514 00:18:03.782570    4316 logs.go:123] Gathering logs for coredns [dcc5a109288b] ...
	I0514 00:18:03.782570    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc5a109288b"
	I0514 00:18:03.806231    4316 command_runner.go:130] > .:53
	I0514 00:18:03.806231    4316 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = aa3c53a4fee7c79042020c4ad5abc53f615c90ace85c56ddcef4febd643c83c914a53a500e1bfe4eab6dd4f6a22b9d2014a8ba875b505ed10d3063ed95ac2ed3
	I0514 00:18:03.806231    4316 command_runner.go:130] > CoreDNS-1.11.1
	I0514 00:18:03.806231    4316 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0514 00:18:03.806231    4316 command_runner.go:130] > [INFO] 127.0.0.1:53257 - 27032 "HINFO IN 6976640239659908905.245956973392320689. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.05278328s
	I0514 00:18:03.806493    4316 logs.go:123] Gathering logs for kube-proxy [91edaaa00da2] ...
	I0514 00:18:03.806493    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91edaaa00da2"
	I0514 00:18:03.829958    4316 command_runner.go:130] ! I0513 23:56:24.901713       1 server_linux.go:69] "Using iptables proxy"
	I0514 00:18:03.829958    4316 command_runner.go:130] ! I0513 23:56:24.929714       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.106.39"]
	I0514 00:18:03.830673    4316 command_runner.go:130] ! I0513 23:56:24.982680       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0514 00:18:03.830722    4316 command_runner.go:130] ! I0513 23:56:24.982795       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0514 00:18:03.830722    4316 command_runner.go:130] ! I0513 23:56:24.982816       1 server_linux.go:165] "Using iptables Proxier"
	I0514 00:18:03.830787    4316 command_runner.go:130] ! I0513 23:56:24.988669       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0514 00:18:03.830835    4316 command_runner.go:130] ! I0513 23:56:24.989566       1 server.go:872] "Version info" version="v1.30.0"
	I0514 00:18:03.830864    4316 command_runner.go:130] ! I0513 23:56:24.989671       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:03.830864    4316 command_runner.go:130] ! I0513 23:56:24.992700       1 config.go:192] "Starting service config controller"
	I0514 00:18:03.830864    4316 command_runner.go:130] ! I0513 23:56:24.993131       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0514 00:18:03.830864    4316 command_runner.go:130] ! I0513 23:56:24.993327       1 config.go:101] "Starting endpoint slice config controller"
	I0514 00:18:03.830952    4316 command_runner.go:130] ! I0513 23:56:24.993339       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0514 00:18:03.830990    4316 command_runner.go:130] ! I0513 23:56:24.994714       1 config.go:319] "Starting node config controller"
	I0514 00:18:03.830990    4316 command_runner.go:130] ! I0513 23:56:24.994744       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0514 00:18:03.830990    4316 command_runner.go:130] ! I0513 23:56:25.094420       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0514 00:18:03.831082    4316 command_runner.go:130] ! I0513 23:56:25.094530       1 shared_informer.go:320] Caches are synced for service config
	I0514 00:18:03.831082    4316 command_runner.go:130] ! I0513 23:56:25.094981       1 shared_informer.go:320] Caches are synced for node config
	I0514 00:18:06.348723    4316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 00:18:06.370900    4316 command_runner.go:130] > 1838
	I0514 00:18:06.371311    4316 api_server.go:72] duration metric: took 1m6.6979187s to wait for apiserver process to appear ...
	I0514 00:18:06.371311    4316 api_server.go:88] waiting for apiserver healthz status ...
	I0514 00:18:06.377504    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0514 00:18:06.399039    4316 command_runner.go:130] > da9e6534cd87
	I0514 00:18:06.399039    4316 logs.go:276] 1 containers: [da9e6534cd87]
	I0514 00:18:06.409402    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0514 00:18:06.427536    4316 command_runner.go:130] > 08450c853590
	I0514 00:18:06.427536    4316 logs.go:276] 1 containers: [08450c853590]
	I0514 00:18:06.433810    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0514 00:18:06.454065    4316 command_runner.go:130] > dcc5a109288b
	I0514 00:18:06.454065    4316 command_runner.go:130] > 76c5ab7859ef
	I0514 00:18:06.454965    4316 logs.go:276] 2 containers: [dcc5a109288b 76c5ab7859ef]
	I0514 00:18:06.462871    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0514 00:18:06.481938    4316 command_runner.go:130] > d3581c1c570c
	I0514 00:18:06.482759    4316 command_runner.go:130] > 964887fc5d36
	I0514 00:18:06.482759    4316 logs.go:276] 2 containers: [d3581c1c570c 964887fc5d36]
	I0514 00:18:06.490925    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0514 00:18:06.513757    4316 command_runner.go:130] > b2a1b31cd7de
	I0514 00:18:06.513757    4316 command_runner.go:130] > 91edaaa00da2
	I0514 00:18:06.513757    4316 logs.go:276] 2 containers: [b2a1b31cd7de 91edaaa00da2]
	I0514 00:18:06.521144    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0514 00:18:06.544950    4316 command_runner.go:130] > b87239d1199a
	I0514 00:18:06.544950    4316 command_runner.go:130] > e96f94398d6d
	I0514 00:18:06.544950    4316 logs.go:276] 2 containers: [b87239d1199a e96f94398d6d]
	I0514 00:18:06.551406    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0514 00:18:06.571695    4316 command_runner.go:130] > 2b424a7cd98c
	I0514 00:18:06.572338    4316 command_runner.go:130] > b7d8d9a5e5ea
	I0514 00:18:06.572338    4316 logs.go:276] 2 containers: [2b424a7cd98c b7d8d9a5e5ea]
	I0514 00:18:06.572459    4316 logs.go:123] Gathering logs for kubelet ...
	I0514 00:18:06.572459    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 kubelet[1385]: I0514 00:16:46.507609    1385 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 kubelet[1385]: I0514 00:16:46.507660    1385 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 kubelet[1385]: I0514 00:16:46.508230    1385 server.go:927] "Client rotation is on, will bootstrap in background"
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 kubelet[1385]: E0514 00:16:46.508906    1385 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 kubelet[1441]: I0514 00:16:47.229791    1441 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 kubelet[1441]: I0514 00:16:47.229941    1441 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 kubelet[1441]: I0514 00:16:47.230764    1441 server.go:927] "Client rotation is on, will bootstrap in background"
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 kubelet[1441]: E0514 00:16:47.231303    1441 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.717000    1520 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.717452    1520 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.717850    1520 server.go:927] "Client rotation is on, will bootstrap in background"
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.719747    1520 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.734764    1520 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.754342    1520 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.754443    1520 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0514 00:18:06.605987    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.755707    1520 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0514 00:18:06.606557    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.755788    1520 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-101100","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0514 00:18:06.606557    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.756671    1520 topology_manager.go:138] "Creating topology manager with none policy"
	I0514 00:18:06.606607    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.756747    1520 container_manager_linux.go:301] "Creating device plugin manager"
	I0514 00:18:06.606648    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.757344    1520 state_mem.go:36] "Initialized new in-memory state store"
	I0514 00:18:06.606648    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.758885    1520 kubelet.go:400] "Attempting to sync node with API server"
	I0514 00:18:06.606684    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.759591    1520 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0514 00:18:06.606684    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.759727    1520 kubelet.go:312] "Adding apiserver pod source"
	I0514 00:18:06.606723    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.760630    1520 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0514 00:18:06.606759    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.765370    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-101100&limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.606798    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.765512    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-101100&limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.606833    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.767039    1520 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0514 00:18:06.606872    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.771297    1520 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0514 00:18:06.606907    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.771834    1520 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0514 00:18:06.606907    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.773545    1520 server.go:1264] "Started kubelet"
	I0514 00:18:06.606946    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.773829    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.606981    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.774013    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.607092    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.780360    1520 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.23.102.122:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-101100.17cf32c62bf0274b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-101100,UID:multinode-101100,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-101100,},FirstTimestamp:2024-05-14 00:16:49.773520715 +0000 UTC m=+0.124549330,LastTimestamp:2024-05-14 00:16:49.773520715 +0000 UTC m=+0.124549330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-101100,}"
	I0514 00:18:06.607127    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.781297    1520 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0514 00:18:06.607164    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.786484    1520 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0514 00:18:06.607164    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.787784    1520 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0514 00:18:06.607164    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.792005    1520 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0514 00:18:06.607164    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.800317    1520 server.go:455] "Adding debug handlers to kubelet server"
	I0514 00:18:06.607254    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.805202    1520 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0514 00:18:06.607254    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.805290    1520 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0514 00:18:06.607254    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.812186    1520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-101100?timeout=10s\": dial tcp 172.23.102.122:8443: connect: connection refused" interval="200ms"
	I0514 00:18:06.607341    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.812333    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.607372    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.812369    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.607372    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.816781    1520 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0514 00:18:06.607422    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.816881    1520 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0514 00:18:06.607422    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.816892    1520 factory.go:221] Registration of the systemd container factory successfully
	I0514 00:18:06.607422    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.849206    1520 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0514 00:18:06.607483    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.849426    1520 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0514 00:18:06.607483    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.849585    1520 state_mem.go:36] "Initialized new in-memory state store"
	I0514 00:18:06.607483    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.850764    1520 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0514 00:18:06.607483    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.850799    1520 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0514 00:18:06.607483    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.850826    1520 policy_none.go:49] "None policy: Start"
	I0514 00:18:06.607544    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.855604    1520 reconciler.go:26] "Reconciler: start to sync state"
	I0514 00:18:06.607544    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.884024    1520 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0514 00:18:06.607544    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.884165    1520 state_mem.go:35] "Initializing new in-memory state store"
	I0514 00:18:06.607544    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.886215    1520 state_mem.go:75] "Updated machine memory state"
	I0514 00:18:06.607544    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.888657    1520 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0514 00:18:06.607615    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.888839    1520 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0514 00:18:06.607615    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.891306    1520 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0514 00:18:06.607646    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.897961    1520 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0514 00:18:06.607646    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.898040    1520 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0514 00:18:06.607646    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.898088    1520 kubelet.go:2337] "Starting kubelet main sync loop"
	I0514 00:18:06.607646    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.898127    1520 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0514 00:18:06.609192    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.898551    1520 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0514 00:18:06.609246    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.899218    1520 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-101100\" not found"
	I0514 00:18:06.609334    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.900215    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.609365    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.900324    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.609365    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.907443    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:06.609433    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.909152    1520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.102.122:8443: connect: connection refused" node="multinode-101100"
	I0514 00:18:06.609463    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.912132    1520 iptables.go:577] "Could not set up iptables canary" err=<
	I0514 00:18:06.609463    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0514 00:18:06.609511    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0514 00:18:06.609511    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0514 00:18:06.609573    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0514 00:18:06.609573    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.999139    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f7c140951f4f8270da243f55135e9f108f3cdf5ef11a4e990e06822ace5adbd"
	I0514 00:18:06.609658    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.999762    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90d7537422a83c9a57ab3bed978e87441e2725a75ebc91f5cad3319d11d4ea18"
	I0514 00:18:06.609686    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.999846    1520 topology_manager.go:215] "Topology Admit Handler" podUID="378d61cf78af695f1df41e321907a84d" podNamespace="kube-system" podName="kube-apiserver-multinode-101100"
	I0514 00:18:06.609751    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.000880    1520 topology_manager.go:215] "Topology Admit Handler" podUID="5393de2704b2efef461d22fa52aa93c8" podNamespace="kube-system" podName="kube-controller-manager-multinode-101100"
	I0514 00:18:06.609779    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.002201    1520 topology_manager.go:215] "Topology Admit Handler" podUID="8083abd658221f47cabf81a00c4ca98e" podNamespace="kube-system" podName="kube-scheduler-multinode-101100"
	I0514 00:18:06.609779    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.004707    1520 topology_manager.go:215] "Topology Admit Handler" podUID="62d8afc7714e8ab65bff9675d120bb67" podNamespace="kube-system" podName="etcd-multinode-101100"
	I0514 00:18:06.609821    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.007687    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcb3b27edcd2a44b67fad4a74f438a62eec78b20422f6f952396053574dfb97e"
	I0514 00:18:06.609821    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.007796    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da9268fd6556bae4d0109c5065588160bcf737c35e1e5df738d31786425c22ff"
	I0514 00:18:06.609898    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.007891    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bd694480978f356b61313108a6ff716a8d5f6e854fea1e4aa89a76a68d049f0"
	I0514 00:18:06.609898    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.007938    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="287e744a4dc2e511f4e40696c7d3b4193896c0c40a5bb527e569d1d3ec2cb908"
	I0514 00:18:06.609898    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.013966    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad0550a5dabf16106fc2956251a65bccdc32f3f3be1f27246f675964fd548a1f"
	I0514 00:18:06.609989    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.014759    1520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-101100?timeout=10s\": dial tcp 172.23.102.122:8443: connect: connection refused" interval="400ms"
	I0514 00:18:06.609989    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.031437    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76d1b8ce19aba5b210540936b7a4b3d885cf4632a985872e3cf05d6cea2e0ca2"
	I0514 00:18:06.610049    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.048649    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bb49b28c842af421711ef939d018058baa07a32bbcdc98976511d4800986697"
	I0514 00:18:06.610049    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074775    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/378d61cf78af695f1df41e321907a84d-ca-certs\") pod \"kube-apiserver-multinode-101100\" (UID: \"378d61cf78af695f1df41e321907a84d\") " pod="kube-system/kube-apiserver-multinode-101100"
	I0514 00:18:06.610135    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074859    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/378d61cf78af695f1df41e321907a84d-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-101100\" (UID: \"378d61cf78af695f1df41e321907a84d\") " pod="kube-system/kube-apiserver-multinode-101100"
	I0514 00:18:06.610179    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074906    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-k8s-certs\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:06.610179    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074943    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:06.610239    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074981    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/62d8afc7714e8ab65bff9675d120bb67-etcd-certs\") pod \"etcd-multinode-101100\" (UID: \"62d8afc7714e8ab65bff9675d120bb67\") " pod="kube-system/etcd-multinode-101100"
	I0514 00:18:06.610298    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075015    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/62d8afc7714e8ab65bff9675d120bb67-etcd-data\") pod \"etcd-multinode-101100\" (UID: \"62d8afc7714e8ab65bff9675d120bb67\") " pod="kube-system/etcd-multinode-101100"
	I0514 00:18:06.610298    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075045    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/378d61cf78af695f1df41e321907a84d-k8s-certs\") pod \"kube-apiserver-multinode-101100\" (UID: \"378d61cf78af695f1df41e321907a84d\") " pod="kube-system/kube-apiserver-multinode-101100"
	I0514 00:18:06.610383    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075248    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-ca-certs\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:06.610413    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075285    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-flexvolume-dir\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:06.610456    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075316    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-kubeconfig\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:06.610527    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075345    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8083abd658221f47cabf81a00c4ca98e-kubeconfig\") pod \"kube-scheduler-multinode-101100\" (UID: \"8083abd658221f47cabf81a00c4ca98e\") " pod="kube-system/kube-scheduler-multinode-101100"
	I0514 00:18:06.610527    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.111262    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:06.610527    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.112979    1520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.102.122:8443: connect: connection refused" node="multinode-101100"
	I0514 00:18:06.610588    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.416229    1520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-101100?timeout=10s\": dial tcp 172.23.102.122:8443: connect: connection refused" interval="800ms"
	I0514 00:18:06.610588    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.515338    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:06.610657    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.516940    1520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.102.122:8443: connect: connection refused" node="multinode-101100"
	I0514 00:18:06.610657    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: W0514 00:16:50.730920    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.610733    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.730993    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.610733    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: W0514 00:16:51.074200    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.610794    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.074270    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.610850    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: I0514 00:16:51.076835    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="419648c0d4053fc49953367496f1dbfe0fc7ce631e09569d18f5031a7c94053b"
	I0514 00:18:06.610850    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: W0514 00:16:51.081775    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-101100&limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.610940    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.081938    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-101100&limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.610973    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: I0514 00:16:51.108133    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="509b8407e0955daa05e6418b83790728e61d0bd72fecdd814c8e92ae9e80d3a3"
	I0514 00:18:06.610973    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.218458    1520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-101100?timeout=10s\": dial tcp 172.23.102.122:8443: connect: connection refused" interval="1.6s"
	I0514 00:18:06.611042    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: I0514 00:16:51.318715    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:06.611079    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.319804    1520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.102.122:8443: connect: connection refused" node="multinode-101100"
	I0514 00:18:06.611079    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: W0514 00:16:51.367337    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.611116    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.367409    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:06.611116    4316 command_runner.go:130] > May 14 00:16:52 multinode-101100 kubelet[1520]: I0514 00:16:52.921237    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:06.611181    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.086028    1520 kubelet_node_status.go:112] "Node was previously registered" node="multinode-101100"
	I0514 00:18:06.611181    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.086698    1520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-multinode-101100\" already exists" pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:06.611181    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.086743    1520 kubelet_node_status.go:76] "Successfully registered node" node="multinode-101100"
	I0514 00:18:06.611237    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.088971    1520 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0514 00:18:06.611237    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.090614    1520 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0514 00:18:06.611237    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.091996    1520 setters.go:580] "Node became not ready" node="multinode-101100" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-05-14T00:16:55Z","lastTransitionTime":"2024-05-14T00:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0514 00:18:06.611318    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.783435    1520 apiserver.go:52] "Watching apiserver"
	I0514 00:18:06.611396    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.788503    1520 topology_manager.go:215] "Topology Admit Handler" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4kmx4"
	I0514 00:18:06.611396    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.788795    1520 topology_manager.go:215] "Topology Admit Handler" podUID="5b3ee167-f21f-46b3-bace-03a7233717e0" podNamespace="kube-system" podName="kindnet-9q2tv"
	I0514 00:18:06.611396    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.788932    1520 topology_manager.go:215] "Topology Admit Handler" podUID="a9a488af-41ba-47f3-87b0-5a2f062afad6" podNamespace="kube-system" podName="kube-proxy-zhcz6"
	I0514 00:18:06.611396    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.789028    1520 topology_manager.go:215] "Topology Admit Handler" podUID="a92f04b8-a93f-42d8-81d7-d4da6bf2e247" podNamespace="kube-system" podName="storage-provisioner"
	I0514 00:18:06.611396    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.789184    1520 topology_manager.go:215] "Topology Admit Handler" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae" podNamespace="default" podName="busybox-fc5497c4f-xqj6w"
	I0514 00:18:06.611396    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.789553    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.611396    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.789850    1520 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-101100" podUID="1d9c79a4-1e4a-46fb-b3e8-02a4775f40af"
	I0514 00:18:06.611396    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.790329    1520 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-101100" podUID="cd31d030-75f8-4abb-bcad-34031cec7aa6"
	I0514 00:18:06.611396    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.794088    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.611396    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.798934    1520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-multinode-101100\" already exists" pod="kube-system/kube-scheduler-multinode-101100"
	I0514 00:18:06.611396    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.809466    1520 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0514 00:18:06.611396    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.835196    1520 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-101100"
	I0514 00:18:06.611924    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.857783    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5b3ee167-f21f-46b3-bace-03a7233717e0-cni-cfg\") pod \"kindnet-9q2tv\" (UID: \"5b3ee167-f21f-46b3-bace-03a7233717e0\") " pod="kube-system/kindnet-9q2tv"
	I0514 00:18:06.611967    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.857845    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b3ee167-f21f-46b3-bace-03a7233717e0-xtables-lock\") pod \"kindnet-9q2tv\" (UID: \"5b3ee167-f21f-46b3-bace-03a7233717e0\") " pod="kube-system/kindnet-9q2tv"
	I0514 00:18:06.612026    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.857866    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9a488af-41ba-47f3-87b0-5a2f062afad6-xtables-lock\") pod \"kube-proxy-zhcz6\" (UID: \"a9a488af-41ba-47f3-87b0-5a2f062afad6\") " pod="kube-system/kube-proxy-zhcz6"
	I0514 00:18:06.612088    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.857954    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b3ee167-f21f-46b3-bace-03a7233717e0-lib-modules\") pod \"kindnet-9q2tv\" (UID: \"5b3ee167-f21f-46b3-bace-03a7233717e0\") " pod="kube-system/kindnet-9q2tv"
	I0514 00:18:06.612111    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.858020    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a92f04b8-a93f-42d8-81d7-d4da6bf2e247-tmp\") pod \"storage-provisioner\" (UID: \"a92f04b8-a93f-42d8-81d7-d4da6bf2e247\") " pod="kube-system/storage-provisioner"
	I0514 00:18:06.612176    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.858051    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9a488af-41ba-47f3-87b0-5a2f062afad6-lib-modules\") pod \"kube-proxy-zhcz6\" (UID: \"a9a488af-41ba-47f3-87b0-5a2f062afad6\") " pod="kube-system/kube-proxy-zhcz6"
	I0514 00:18:06.612176    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.859176    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:06.612225    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.859325    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:16:56.359260421 +0000 UTC m=+6.710289036 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:06.612290    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.873841    1520 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-101100"
	I0514 00:18:06.612290    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.907826    1520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03d9b35578220c9e99f77722d9aa294f" path="/var/lib/kubelet/pods/03d9b35578220c9e99f77722d9aa294f/volumes"
	I0514 00:18:06.612360    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.910490    1520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1af4b764a5249ff25d3c1c709387c273" path="/var/lib/kubelet/pods/1af4b764a5249ff25d3c1c709387c273/volumes"
	I0514 00:18:06.612360    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.917375    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.612415    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.917415    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.612461    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.917466    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:16:56.417450852 +0000 UTC m=+6.768479567 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.612512    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.964380    1520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-101100" podStartSLOduration=0.9643304 podStartE2EDuration="964.3304ms" podCreationTimestamp="2024-05-14 00:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-14 00:16:55.964174289 +0000 UTC m=+6.315203004" watchObservedRunningTime="2024-05-14 00:16:55.9643304 +0000 UTC m=+6.315359015"
	I0514 00:18:06.612572    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.985118    1520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-101100" podStartSLOduration=0.985100539 podStartE2EDuration="985.100539ms" podCreationTimestamp="2024-05-14 00:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-14 00:16:55.984806519 +0000 UTC m=+6.335835134" watchObservedRunningTime="2024-05-14 00:16:55.985100539 +0000 UTC m=+6.336129154"
	I0514 00:18:06.612624    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.362973    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:06.612684    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.363041    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:16:57.363025821 +0000 UTC m=+7.714054436 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:06.612684    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.463836    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.612684    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.463868    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.612799    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.463923    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:16:57.46390701 +0000 UTC m=+7.814935725 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.377986    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.378101    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:16:59.378049439 +0000 UTC m=+9.729078054 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.478290    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.478356    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.478448    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:16:59.478431994 +0000 UTC m=+9.829460709 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.899119    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.899678    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.394980    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.395173    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:17:03.39515828 +0000 UTC m=+13.746186895 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.496260    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.496313    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.496438    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:17:03.496350091 +0000 UTC m=+13.847378806 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.891391    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.901591    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.914896    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.612825    4316 command_runner.go:130] > May 14 00:17:01 multinode-101100 kubelet[1520]: E0514 00:17:01.898894    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.613349    4316 command_runner.go:130] > May 14 00:17:01 multinode-101100 kubelet[1520]: E0514 00:17:01.899345    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.613349    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.445887    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:06.613425    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.445965    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:17:11.44595071 +0000 UTC m=+21.796979425 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:06.613457    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.547258    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.613457    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.547292    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.547346    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:17:11.547331033 +0000 UTC m=+21.898359648 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.899515    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.900290    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:04 multinode-101100 kubelet[1520]: E0514 00:17:04.893282    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:05 multinode-101100 kubelet[1520]: E0514 00:17:05.900260    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:05 multinode-101100 kubelet[1520]: E0514 00:17:05.900651    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:07 multinode-101100 kubelet[1520]: E0514 00:17:07.899212    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:07 multinode-101100 kubelet[1520]: E0514 00:17:07.899658    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:09 multinode-101100 kubelet[1520]: E0514 00:17:09.895008    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:09 multinode-101100 kubelet[1520]: E0514 00:17:09.899381    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:09 multinode-101100 kubelet[1520]: E0514 00:17:09.899884    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.508629    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.508833    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:17:27.508813455 +0000 UTC m=+37.859842170 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.609334    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.609455    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.613514    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.609579    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:17:27.609562102 +0000 UTC m=+37.960590817 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.614047    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.899431    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.614087    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.899749    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.614131    4316 command_runner.go:130] > May 14 00:17:13 multinode-101100 kubelet[1520]: E0514 00:17:13.898578    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.614131    4316 command_runner.go:130] > May 14 00:17:13 multinode-101100 kubelet[1520]: E0514 00:17:13.899676    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.614184    4316 command_runner.go:130] > May 14 00:17:14 multinode-101100 kubelet[1520]: E0514 00:17:14.897029    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:06.614184    4316 command_runner.go:130] > May 14 00:17:15 multinode-101100 kubelet[1520]: E0514 00:17:15.899665    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.614259    4316 command_runner.go:130] > May 14 00:17:15 multinode-101100 kubelet[1520]: E0514 00:17:15.900476    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.614259    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: I0514 00:17:17.766386    1520 scope.go:117] "RemoveContainer" containerID="9c4eb727cedb65853cc3a94fdcc3e267ed41cd9cb15ef1cc1bb84f6f2278c9c4"
	I0514 00:18:06.614310    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: I0514 00:17:17.767364    1520 scope.go:117] "RemoveContainer" containerID="b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7"
	I0514 00:18:06.614310    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: E0514 00:17:17.767901    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kindnet-cni pod=kindnet-9q2tv_kube-system(5b3ee167-f21f-46b3-bace-03a7233717e0)\"" pod="kube-system/kindnet-9q2tv" podUID="5b3ee167-f21f-46b3-bace-03a7233717e0"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: E0514 00:17:17.898891    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: E0514 00:17:17.899300    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:19 multinode-101100 kubelet[1520]: E0514 00:17:19.898102    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:19 multinode-101100 kubelet[1520]: E0514 00:17:19.899045    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:19 multinode-101100 kubelet[1520]: E0514 00:17:19.899315    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:21 multinode-101100 kubelet[1520]: E0514 00:17:21.900488    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:21 multinode-101100 kubelet[1520]: E0514 00:17:21.900677    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:23 multinode-101100 kubelet[1520]: E0514 00:17:23.899091    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:23 multinode-101100 kubelet[1520]: E0514 00:17:23.899625    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:24 multinode-101100 kubelet[1520]: E0514 00:17:24.899382    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:25 multinode-101100 kubelet[1520]: E0514 00:17:25.900463    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:25 multinode-101100 kubelet[1520]: E0514 00:17:25.900948    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.614379    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.550622    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.550839    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:17:59.550821042 +0000 UTC m=+69.901849657 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.651942    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.651988    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.652038    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:17:59.652024653 +0000 UTC m=+70.003053368 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.900302    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.901190    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: I0514 00:17:27.901408    1520 scope.go:117] "RemoveContainer" containerID="b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: I0514 00:17:27.913749    1520 scope.go:117] "RemoveContainer" containerID="e6ee22ee5c1b88cb0b1190c646094aefe229bfbd4486f007cde2b36da39ca886"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: I0514 00:17:27.914050    1520 scope.go:117] "RemoveContainer" containerID="b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.914651    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a92f04b8-a93f-42d8-81d7-d4da6bf2e247)\"" pod="kube-system/storage-provisioner" podUID="a92f04b8-a93f-42d8-81d7-d4da6bf2e247"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:29 multinode-101100 kubelet[1520]: E0514 00:17:29.898652    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:29 multinode-101100 kubelet[1520]: E0514 00:17:29.899154    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:29 multinode-101100 kubelet[1520]: E0514 00:17:29.900744    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:31 multinode-101100 kubelet[1520]: E0514 00:17:31.900407    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:31 multinode-101100 kubelet[1520]: E0514 00:17:31.902295    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:33 multinode-101100 kubelet[1520]: E0514 00:17:33.898560    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:33 multinode-101100 kubelet[1520]: E0514 00:17:33.899627    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:39 multinode-101100 kubelet[1520]: I0514 00:17:39.899892    1520 scope.go:117] "RemoveContainer" containerID="b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]: I0514 00:17:49.888753    1520 scope.go:117] "RemoveContainer" containerID="eda79d47d28ffbc726bec7eaad072eeebb31ec439ed9bbe9fd544b9913b8f3ea"
	I0514 00:18:06.614933    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]: E0514 00:17:49.924547    1520 iptables.go:577] "Could not set up iptables canary" err=<
	I0514 00:18:06.615452    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0514 00:18:06.615452    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0514 00:18:06.615492    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0514 00:18:06.615492    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0514 00:18:06.615492    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]: I0514 00:17:49.932695    1520 scope.go:117] "RemoveContainer" containerID="06f1a683cad8348fc4f8e339f226bbda12c4e8c1025c7acb52e2792253dd3008"
	I0514 00:18:06.615492    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 kubelet[1520]: I0514 00:18:00.478966    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cccb5e8cee3b173bd49a88aee4239ccc8bc11a3a166316e92f3a9abce9b252d"
	I0514 00:18:06.615492    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 kubelet[1520]: I0514 00:18:00.543407    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cb9b6d6d0915742a78c054211d49332a04beb4875f8a8f80cc4131b2a11aa2d"
	I0514 00:18:06.654604    4316 logs.go:123] Gathering logs for kube-scheduler [964887fc5d36] ...
	I0514 00:18:06.654604    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964887fc5d36"
	I0514 00:18:06.689073    4316 command_runner.go:130] ! I0513 23:56:04.693680       1 serving.go:380] Generated self-signed cert in-memory
	I0514 00:18:06.689470    4316 command_runner.go:130] ! W0513 23:56:06.133341       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0514 00:18:06.689572    4316 command_runner.go:130] ! W0513 23:56:06.133396       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:06.689642    4316 command_runner.go:130] ! W0513 23:56:06.133407       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0514 00:18:06.689642    4316 command_runner.go:130] ! W0513 23:56:06.133415       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0514 00:18:06.689710    4316 command_runner.go:130] ! I0513 23:56:06.170291       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0514 00:18:06.689710    4316 command_runner.go:130] ! I0513 23:56:06.170533       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:06.689763    4316 command_runner.go:130] ! I0513 23:56:06.174536       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0514 00:18:06.689797    4316 command_runner.go:130] ! I0513 23:56:06.174684       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0514 00:18:06.689797    4316 command_runner.go:130] ! I0513 23:56:06.174703       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:06.689797    4316 command_runner.go:130] ! I0513 23:56:06.174918       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:18:06.689868    4316 command_runner.go:130] ! W0513 23:56:06.182722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0514 00:18:06.689932    4316 command_runner.go:130] ! E0513 23:56:06.186053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0514 00:18:06.689990    4316 command_runner.go:130] ! W0513 23:56:06.183583       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.690062    4316 command_runner.go:130] ! W0513 23:56:06.183698       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0514 00:18:06.690062    4316 command_runner.go:130] ! W0513 23:56:06.183781       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0514 00:18:06.690180    4316 command_runner.go:130] ! W0513 23:56:06.183835       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0514 00:18:06.690239    4316 command_runner.go:130] ! W0513 23:56:06.183868       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0514 00:18:06.690239    4316 command_runner.go:130] ! W0513 23:56:06.184039       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:06.690339    4316 command_runner.go:130] ! W0513 23:56:06.186929       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.690396    4316 command_runner.go:130] ! W0513 23:56:06.186969       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.690517    4316 command_runner.go:130] ! W0513 23:56:06.187026       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0514 00:18:06.690588    4316 command_runner.go:130] ! E0513 23:56:06.188647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0514 00:18:06.690641    4316 command_runner.go:130] ! E0513 23:56:06.188112       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.690703    4316 command_runner.go:130] ! E0513 23:56:06.188121       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0514 00:18:06.690762    4316 command_runner.go:130] ! E0513 23:56:06.188233       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0514 00:18:06.690835    4316 command_runner.go:130] ! E0513 23:56:06.188242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0514 00:18:06.690925    4316 command_runner.go:130] ! E0513 23:56:06.189252       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0514 00:18:06.690969    4316 command_runner.go:130] ! E0513 23:56:06.189533       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:06.691063    4316 command_runner.go:130] ! E0513 23:56:06.189643       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.691104    4316 command_runner.go:130] ! E0513 23:56:06.189773       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.691191    4316 command_runner.go:130] ! W0513 23:56:06.190106       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0514 00:18:06.691256    4316 command_runner.go:130] ! E0513 23:56:06.190324       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0514 00:18:06.691320    4316 command_runner.go:130] ! W0513 23:56:06.190538       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0514 00:18:06.691464    4316 command_runner.go:130] ! E0513 23:56:06.191036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0514 00:18:06.691464    4316 command_runner.go:130] ! W0513 23:56:06.191581       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0514 00:18:06.691517    4316 command_runner.go:130] ! E0513 23:56:06.192160       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0514 00:18:06.691555    4316 command_runner.go:130] ! W0513 23:56:06.191626       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.691603    4316 command_runner.go:130] ! E0513 23:56:06.192721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.691603    4316 command_runner.go:130] ! W0513 23:56:06.190821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0514 00:18:06.691643    4316 command_runner.go:130] ! E0513 23:56:06.193134       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0514 00:18:06.691643    4316 command_runner.go:130] ! W0513 23:56:07.154218       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0514 00:18:06.691703    4316 command_runner.go:130] ! E0513 23:56:07.155376       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0514 00:18:06.691703    4316 command_runner.go:130] ! W0513 23:56:07.229548       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0514 00:18:06.691760    4316 command_runner.go:130] ! E0513 23:56:07.229613       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0514 00:18:06.691760    4316 command_runner.go:130] ! W0513 23:56:07.344429       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.691760    4316 command_runner.go:130] ! E0513 23:56:07.344853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.691824    4316 command_runner.go:130] ! W0513 23:56:07.410556       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0514 00:18:06.691883    4316 command_runner.go:130] ! E0513 23:56:07.410716       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0514 00:18:06.691883    4316 command_runner.go:130] ! W0513 23:56:07.423084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0514 00:18:06.691960    4316 command_runner.go:130] ! E0513 23:56:07.423126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0514 00:18:06.691960    4316 command_runner.go:130] ! W0513 23:56:07.467897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0514 00:18:06.691998    4316 command_runner.go:130] ! E0513 23:56:07.467939       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! W0513 23:56:07.484903       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! E0513 23:56:07.485019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! W0513 23:56:07.545758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! E0513 23:56:07.546087       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! W0513 23:56:07.573884       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! E0513 23:56:07.573980       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! W0513 23:56:07.633780       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! E0513 23:56:07.633901       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! W0513 23:56:07.680821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! E0513 23:56:07.680938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! W0513 23:56:07.704130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! E0513 23:56:07.704357       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! W0513 23:56:07.736914       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:06.692028    4316 command_runner.go:130] ! E0513 23:56:07.737079       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:06.692028    4316 command_runner.go:130] ! W0513 23:56:07.754367       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0514 00:18:06.692028    4316 command_runner.go:130] ! E0513 23:56:07.754798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0514 00:18:06.692560    4316 command_runner.go:130] ! I0513 23:56:09.676327       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:18:06.692560    4316 command_runner.go:130] ! E0514 00:14:35.689344       1 run.go:74] "command failed" err="finished without leader elect"
	I0514 00:18:06.700984    4316 logs.go:123] Gathering logs for kube-controller-manager [e96f94398d6d] ...
	I0514 00:18:06.700984    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e96f94398d6d"
	I0514 00:18:06.722879    4316 command_runner.go:130] ! I0513 23:56:04.448604       1 serving.go:380] Generated self-signed cert in-memory
	I0514 00:18:06.722879    4316 command_runner.go:130] ! I0513 23:56:04.932336       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0514 00:18:06.722879    4316 command_runner.go:130] ! I0513 23:56:04.932378       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:06.723442    4316 command_runner.go:130] ! I0513 23:56:04.934044       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0514 00:18:06.723511    4316 command_runner.go:130] ! I0513 23:56:04.934133       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:06.723511    4316 command_runner.go:130] ! I0513 23:56:04.934796       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0514 00:18:06.723511    4316 command_runner.go:130] ! I0513 23:56:04.935005       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:06.723511    4316 command_runner.go:130] ! I0513 23:56:09.124957       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0514 00:18:06.723511    4316 command_runner.go:130] ! I0513 23:56:09.125092       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0514 00:18:06.723511    4316 command_runner.go:130] ! I0513 23:56:09.140996       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0514 00:18:06.723511    4316 command_runner.go:130] ! I0513 23:56:09.141447       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0514 00:18:06.723511    4316 command_runner.go:130] ! I0513 23:56:09.141567       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0514 00:18:06.723511    4316 command_runner.go:130] ! I0513 23:56:09.156847       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0514 00:18:06.723633    4316 command_runner.go:130] ! I0513 23:56:09.157241       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0514 00:18:06.723633    4316 command_runner.go:130] ! I0513 23:56:09.157455       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0514 00:18:06.723633    4316 command_runner.go:130] ! I0513 23:56:09.170795       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0514 00:18:06.723633    4316 command_runner.go:130] ! I0513 23:56:09.171005       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0514 00:18:06.723719    4316 command_runner.go:130] ! I0513 23:56:09.171684       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0514 00:18:06.726821    4316 command_runner.go:130] ! I0513 23:56:09.171921       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0514 00:18:06.726883    4316 command_runner.go:130] ! I0513 23:56:09.172144       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0514 00:18:06.726883    4316 command_runner.go:130] ! I0513 23:56:09.183975       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0514 00:18:06.726883    4316 command_runner.go:130] ! I0513 23:56:09.184362       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0514 00:18:06.726883    4316 command_runner.go:130] ! I0513 23:56:09.185233       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0514 00:18:06.726883    4316 command_runner.go:130] ! I0513 23:56:09.230173       1 shared_informer.go:320] Caches are synced for tokens
	I0514 00:18:06.726940    4316 command_runner.go:130] ! I0513 23:56:09.242679       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0514 00:18:06.726940    4316 command_runner.go:130] ! I0513 23:56:09.242735       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0514 00:18:06.726940    4316 command_runner.go:130] ! I0513 23:56:09.242821       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0514 00:18:06.726940    4316 command_runner.go:130] ! I0513 23:56:09.249513       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0514 00:18:06.727001    4316 command_runner.go:130] ! I0513 23:56:09.249614       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0514 00:18:06.727001    4316 command_runner.go:130] ! I0513 23:56:09.249731       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0514 00:18:06.727001    4316 command_runner.go:130] ! I0513 23:56:09.249824       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0514 00:18:06.727066    4316 command_runner.go:130] ! I0513 23:56:09.249912       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0514 00:18:06.727121    4316 command_runner.go:130] ! I0513 23:56:09.250132       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0514 00:18:06.727121    4316 command_runner.go:130] ! I0513 23:56:09.250216       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0514 00:18:06.727121    4316 command_runner.go:130] ! I0513 23:56:09.250270       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0514 00:18:06.727177    4316 command_runner.go:130] ! I0513 23:56:09.250425       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0514 00:18:06.727177    4316 command_runner.go:130] ! I0513 23:56:09.250604       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0514 00:18:06.727177    4316 command_runner.go:130] ! I0513 23:56:09.250656       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0514 00:18:06.727273    4316 command_runner.go:130] ! I0513 23:56:09.250695       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0514 00:18:06.727273    4316 command_runner.go:130] ! I0513 23:56:09.250745       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0514 00:18:06.727273    4316 command_runner.go:130] ! I0513 23:56:09.250794       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0514 00:18:06.727273    4316 command_runner.go:130] ! I0513 23:56:09.250851       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0514 00:18:06.727340    4316 command_runner.go:130] ! I0513 23:56:09.250883       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0514 00:18:06.727340    4316 command_runner.go:130] ! I0513 23:56:09.250994       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0514 00:18:06.727340    4316 command_runner.go:130] ! I0513 23:56:09.251028       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0514 00:18:06.727340    4316 command_runner.go:130] ! I0513 23:56:09.251909       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0514 00:18:06.727340    4316 command_runner.go:130] ! I0513 23:56:09.251999       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0514 00:18:06.727402    4316 command_runner.go:130] ! I0513 23:56:09.252142       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0514 00:18:06.727402    4316 command_runner.go:130] ! I0513 23:56:09.305089       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0514 00:18:06.727402    4316 command_runner.go:130] ! I0513 23:56:09.305302       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0514 00:18:06.727402    4316 command_runner.go:130] ! I0513 23:56:09.305357       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0514 00:18:06.727467    4316 command_runner.go:130] ! I0513 23:56:09.305376       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0514 00:18:06.727467    4316 command_runner.go:130] ! I0513 23:56:09.321907       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0514 00:18:06.727467    4316 command_runner.go:130] ! I0513 23:56:09.322244       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0514 00:18:06.727467    4316 command_runner.go:130] ! I0513 23:56:09.322270       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0514 00:18:06.727467    4316 command_runner.go:130] ! I0513 23:56:09.324160       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0514 00:18:06.727528    4316 command_runner.go:130] ! I0513 23:56:09.324208       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0514 00:18:06.727528    4316 command_runner.go:130] ! E0513 23:56:09.334850       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0514 00:18:06.727528    4316 command_runner.go:130] ! I0513 23:56:09.335135       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0514 00:18:06.727593    4316 command_runner.go:130] ! I0513 23:56:09.346530       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0514 00:18:06.727593    4316 command_runner.go:130] ! I0513 23:56:09.346809       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0514 00:18:06.727593    4316 command_runner.go:130] ! I0513 23:56:09.346883       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0514 00:18:06.730663    4316 command_runner.go:130] ! I0513 23:56:09.385297       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0514 00:18:06.730663    4316 command_runner.go:130] ! I0513 23:56:09.385391       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0514 00:18:06.730663    4316 command_runner.go:130] ! I0513 23:56:09.385403       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0514 00:18:06.730663    4316 command_runner.go:130] ! I0513 23:56:09.542113       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0514 00:18:06.730663    4316 command_runner.go:130] ! I0513 23:56:09.542271       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0514 00:18:06.730663    4316 command_runner.go:130] ! I0513 23:56:09.542284       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0514 00:18:06.730663    4316 command_runner.go:130] ! I0513 23:56:09.581300       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0514 00:18:06.730663    4316 command_runner.go:130] ! I0513 23:56:09.581321       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0514 00:18:06.730663    4316 command_runner.go:130] ! I0513 23:56:09.581454       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:06.730663    4316 command_runner.go:130] ! I0513 23:56:09.581971       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0514 00:18:06.730663    4316 command_runner.go:130] ! I0513 23:56:09.582008       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0514 00:18:06.731204    4316 command_runner.go:130] ! I0513 23:56:09.582030       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:06.731204    4316 command_runner.go:130] ! I0513 23:56:09.582896       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0514 00:18:06.731204    4316 command_runner.go:130] ! I0513 23:56:09.582908       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0514 00:18:06.731277    4316 command_runner.go:130] ! I0513 23:56:09.582922       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:06.731277    4316 command_runner.go:130] ! I0513 23:56:09.583436       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0514 00:18:06.731277    4316 command_runner.go:130] ! I0513 23:56:09.583678       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0514 00:18:06.731277    4316 command_runner.go:130] ! I0513 23:56:09.583691       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0514 00:18:06.731339    4316 command_runner.go:130] ! I0513 23:56:09.583727       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:06.731339    4316 command_runner.go:130] ! I0513 23:56:09.734073       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:09.734159       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:09.734446       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:09.885354       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:09.885756       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:09.885934       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:10.040288       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:10.040486       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:20.090311       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:20.090418       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:20.090428       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:20.090911       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:20.091093       1 shared_informer.go:313] Waiting for caches to sync for node
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:20.101598       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:20.101778       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0514 00:18:06.731394    4316 command_runner.go:130] ! I0513 23:56:20.101805       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.114509       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.114580       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.114849       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.114678       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.115038       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.115048       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0514 00:18:06.733652    4316 command_runner.go:130] ! E0513 23:56:20.117646       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.117865       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.130498       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.130711       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.130932       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.143035       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.143414       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.143607       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.160023       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.160191       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.160215       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.170613       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.170951       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.171064       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.179840       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.180447       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.180590       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.190977       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.191286       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.191448       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.204888       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.205578       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.205670       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.239034       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.239193       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.239262       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.482568       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.486046       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.486073       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.486093       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.786163       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:20.786358       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.082938       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.083657       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.083743       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.238006       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.238099       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.238152       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.238163       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.283674       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.283751       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.283986       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.284217       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.442664       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.442840       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.442854       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.587997       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.588249       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.588322       1 shared_informer.go:313] Waiting for caches to sync for job
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.740205       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.740392       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.740547       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.889738       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.890053       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:21.890145       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.038114       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.038197       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.038216       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.038314       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.038329       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.291303       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.291332       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.291999       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.299124       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.317101       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.321553       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100\" does not exist"
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.322540       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.335837       1 shared_informer.go:320] Caches are synced for cronjob
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.339493       1 shared_informer.go:320] Caches are synced for PV protection
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.339494       1 shared_informer.go:320] Caches are synced for GC
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.339605       1 shared_informer.go:320] Caches are synced for crt configmap
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.340940       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.341044       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0514 00:18:06.733652    4316 command_runner.go:130] ! I0513 23:56:22.342309       1 shared_informer.go:320] Caches are synced for service account
	I0514 00:18:06.734937    4316 command_runner.go:130] ! I0513 23:56:22.343675       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0514 00:18:06.734937    4316 command_runner.go:130] ! I0513 23:56:22.343828       1 shared_informer.go:320] Caches are synced for PVC protection
	I0514 00:18:06.734937    4316 command_runner.go:130] ! I0513 23:56:22.347539       1 shared_informer.go:320] Caches are synced for expand
	I0514 00:18:06.734991    4316 command_runner.go:130] ! I0513 23:56:22.357773       1 shared_informer.go:320] Caches are synced for deployment
	I0514 00:18:06.734991    4316 command_runner.go:130] ! I0513 23:56:22.361377       1 shared_informer.go:320] Caches are synced for ephemeral
	I0514 00:18:06.734991    4316 command_runner.go:130] ! I0513 23:56:22.372019       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0514 00:18:06.735029    4316 command_runner.go:130] ! I0513 23:56:22.380620       1 shared_informer.go:320] Caches are synced for stateful set
	I0514 00:18:06.735029    4316 command_runner.go:130] ! I0513 23:56:22.382092       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0514 00:18:06.735066    4316 command_runner.go:130] ! I0513 23:56:22.382250       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0514 00:18:06.735066    4316 command_runner.go:130] ! I0513 23:56:22.382979       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0514 00:18:06.735109    4316 command_runner.go:130] ! I0513 23:56:22.384565       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0514 00:18:06.735109    4316 command_runner.go:130] ! I0513 23:56:22.384604       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0514 00:18:06.735109    4316 command_runner.go:130] ! I0513 23:56:22.384724       1 shared_informer.go:320] Caches are synced for HPA
	I0514 00:18:06.735109    4316 command_runner.go:130] ! I0513 23:56:22.386009       1 shared_informer.go:320] Caches are synced for TTL
	I0514 00:18:06.735109    4316 command_runner.go:130] ! I0513 23:56:22.386117       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0514 00:18:06.735109    4316 command_runner.go:130] ! I0513 23:56:22.386299       1 shared_informer.go:320] Caches are synced for attach detach
	I0514 00:18:06.735109    4316 command_runner.go:130] ! I0513 23:56:22.389103       1 shared_informer.go:320] Caches are synced for job
	I0514 00:18:06.735109    4316 command_runner.go:130] ! I0513 23:56:22.390596       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0514 00:18:06.735228    4316 command_runner.go:130] ! I0513 23:56:22.391278       1 shared_informer.go:320] Caches are synced for node
	I0514 00:18:06.735228    4316 command_runner.go:130] ! I0513 23:56:22.391538       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0514 00:18:06.735228    4316 command_runner.go:130] ! I0513 23:56:22.391663       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0514 00:18:06.735228    4316 command_runner.go:130] ! I0513 23:56:22.392031       1 shared_informer.go:320] Caches are synced for namespace
	I0514 00:18:06.735228    4316 command_runner.go:130] ! I0513 23:56:22.392207       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0514 00:18:06.735285    4316 command_runner.go:130] ! I0513 23:56:22.392242       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0514 00:18:06.735285    4316 command_runner.go:130] ! I0513 23:56:22.392249       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0514 00:18:06.735285    4316 command_runner.go:130] ! I0513 23:56:22.402105       1 shared_informer.go:320] Caches are synced for daemon sets
	I0514 00:18:06.735285    4316 command_runner.go:130] ! I0513 23:56:22.405500       1 shared_informer.go:320] Caches are synced for disruption
	I0514 00:18:06.735338    4316 command_runner.go:130] ! I0513 23:56:22.406927       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0514 00:18:06.735338    4316 command_runner.go:130] ! I0513 23:56:22.411111       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100" podCIDRs=["10.244.0.0/24"]
	I0514 00:18:06.735338    4316 command_runner.go:130] ! I0513 23:56:22.431075       1 shared_informer.go:320] Caches are synced for persistent volume
	I0514 00:18:06.735338    4316 command_runner.go:130] ! I0513 23:56:22.443663       1 shared_informer.go:320] Caches are synced for endpoint
	I0514 00:18:06.735398    4316 command_runner.go:130] ! I0513 23:56:22.552382       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 00:18:06.735398    4316 command_runner.go:130] ! I0513 23:56:22.573274       1 shared_informer.go:320] Caches are synced for taint
	I0514 00:18:06.735434    4316 command_runner.go:130] ! I0513 23:56:22.573442       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0514 00:18:06.735434    4316 command_runner.go:130] ! I0513 23:56:22.573935       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100"
	I0514 00:18:06.735469    4316 command_runner.go:130] ! I0513 23:56:22.574179       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0514 00:18:06.735524    4316 command_runner.go:130] ! I0513 23:56:22.586849       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0514 00:18:06.735524    4316 command_runner.go:130] ! I0513 23:56:22.602574       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 00:18:06.735524    4316 command_runner.go:130] ! I0513 23:56:23.018846       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 00:18:06.735524    4316 command_runner.go:130] ! I0513 23:56:23.087540       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 00:18:06.735572    4316 command_runner.go:130] ! I0513 23:56:23.087631       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0514 00:18:06.735572    4316 command_runner.go:130] ! I0513 23:56:23.691681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="593.37356ms"
	I0514 00:18:06.735572    4316 command_runner.go:130] ! I0513 23:56:23.736584       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.765409ms"
	I0514 00:18:06.735630    4316 command_runner.go:130] ! I0513 23:56:23.736691       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.105µs"
	I0514 00:18:06.735630    4316 command_runner.go:130] ! I0513 23:56:23.741069       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="82.307µs"
	I0514 00:18:06.735682    4316 command_runner.go:130] ! I0513 23:56:24.558346       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.410112ms"
	I0514 00:18:06.735682    4316 command_runner.go:130] ! I0513 23:56:24.599621       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.388659ms"
	I0514 00:18:06.735682    4316 command_runner.go:130] ! I0513 23:56:24.599778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.705µs"
	I0514 00:18:06.735742    4316 command_runner.go:130] ! I0513 23:56:35.460855       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.604µs"
	I0514 00:18:06.735742    4316 command_runner.go:130] ! I0513 23:56:35.495875       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="63.404µs"
	I0514 00:18:06.735793    4316 command_runner.go:130] ! I0513 23:56:36.868700       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="85.505µs"
	I0514 00:18:06.735793    4316 command_runner.go:130] ! I0513 23:56:36.916603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.935352ms"
	I0514 00:18:06.735793    4316 command_runner.go:130] ! I0513 23:56:36.917123       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.803µs"
	I0514 00:18:06.735846    4316 command_runner.go:130] ! I0513 23:56:37.577172       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0514 00:18:06.735846    4316 command_runner.go:130] ! I0513 23:59:02.230067       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m02\" does not exist"
	I0514 00:18:06.735896    4316 command_runner.go:130] ! I0513 23:59:02.246355       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m02" podCIDRs=["10.244.1.0/24"]
	I0514 00:18:06.735896    4316 command_runner.go:130] ! I0513 23:59:02.603699       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m02"
	I0514 00:18:06.735896    4316 command_runner.go:130] ! I0513 23:59:22.527169       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:06.735953    4316 command_runner.go:130] ! I0513 23:59:45.791856       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.887671ms"
	I0514 00:18:06.735953    4316 command_runner.go:130] ! I0513 23:59:45.808219       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.096894ms"
	I0514 00:18:06.736003    4316 command_runner.go:130] ! I0513 23:59:45.808747       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.005µs"
	I0514 00:18:06.736003    4316 command_runner.go:130] ! I0513 23:59:45.809833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.705µs"
	I0514 00:18:06.736003    4316 command_runner.go:130] ! I0513 23:59:45.811263       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.604µs"
	I0514 00:18:06.736059    4316 command_runner.go:130] ! I0513 23:59:48.526617       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.926472ms"
	I0514 00:18:06.736059    4316 command_runner.go:130] ! I0513 23:59:48.529326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.302µs"
	I0514 00:18:06.736059    4316 command_runner.go:130] ! I0513 23:59:48.555529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.972453ms"
	I0514 00:18:06.736111    4316 command_runner.go:130] ! I0513 23:59:48.556317       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.601µs"
	I0514 00:18:06.736111    4316 command_runner.go:130] ! I0514 00:03:17.563212       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:06.736165    4316 command_runner.go:130] ! I0514 00:03:17.565297       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m03\" does not exist"
	I0514 00:18:06.736165    4316 command_runner.go:130] ! I0514 00:03:17.579900       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m03" podCIDRs=["10.244.2.0/24"]
	I0514 00:18:06.736216    4316 command_runner.go:130] ! I0514 00:03:17.665892       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m03"
	I0514 00:18:06.736216    4316 command_runner.go:130] ! I0514 00:03:38.035898       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:06.736322    4316 command_runner.go:130] ! I0514 00:10:17.797191       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:06.736378    4316 command_runner.go:130] ! I0514 00:12:39.070271       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:06.736378    4316 command_runner.go:130] ! I0514 00:12:44.527915       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:06.736426    4316 command_runner.go:130] ! I0514 00:12:44.528275       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m03\" does not exist"
	I0514 00:18:06.736426    4316 command_runner.go:130] ! I0514 00:12:44.543895       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m03" podCIDRs=["10.244.3.0/24"]
	I0514 00:18:06.736426    4316 command_runner.go:130] ! I0514 00:12:49.983419       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:06.736481    4316 command_runner.go:130] ! I0514 00:14:17.920991       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:06.736481    4316 command_runner.go:130] ! I0514 00:14:33.013074       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.740609ms"
	I0514 00:18:06.736481    4316 command_runner.go:130] ! I0514 00:14:33.013918       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.506µs"
	I0514 00:18:06.752395    4316 logs.go:123] Gathering logs for coredns [76c5ab7859ef] ...
	I0514 00:18:06.752395    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5ab7859ef"
	I0514 00:18:06.775995    4316 command_runner.go:130] > .:53
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = aa3c53a4fee7c79042020c4ad5abc53f615c90ace85c56ddcef4febd643c83c914a53a500e1bfe4eab6dd4f6a22b9d2014a8ba875b505ed10d3063ed95ac2ed3
	I0514 00:18:06.776994    4316 command_runner.go:130] > CoreDNS-1.11.1
	I0514 00:18:06.776994    4316 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 127.0.0.1:57161 - 45698 "HINFO IN 8990392176501838712.5889638972791529478. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051692136s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:55099 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000211505s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:55878 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.185519855s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:33619 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.15684109s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:49440 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.197645067s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:50960 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000430608s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:46839 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000167103s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:55330 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000155803s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:50874 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000131802s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:53724 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096802s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:59752 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.042707366s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:54429 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000269706s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:48558 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000262605s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:46986 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023487677s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:60460 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174903s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:60672 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000204304s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:36311 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110402s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:43910 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000301006s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:52495 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000145803s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:46357 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066702s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:41390 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062301s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:35739 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000084301s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:44800 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000163303s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:57631 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068702s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:50842 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135702s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:41210 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204604s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:57858 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073801s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:48782 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000152303s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:36081 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121002s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:46909 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115002s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:36030 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000220205s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:56187 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059401s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:51500 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099802s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:57247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147903s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:46132 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000170203s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:57206 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000452309s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.1.2:44795 - 5 "PTR IN 1.96.23.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000146203s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:33385 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000082102s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:56742 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000173704s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:46927 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185904s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] 10.244.0.3:42956 - 5 "PTR IN 1.96.23.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000054801s
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0514 00:18:06.776994    4316 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0514 00:18:06.780954    4316 logs.go:123] Gathering logs for kube-scheduler [d3581c1c570c] ...
	I0514 00:18:06.781477    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3581c1c570c"
	I0514 00:18:06.802398    4316 command_runner.go:130] ! I0514 00:16:52.716401       1 serving.go:380] Generated self-signed cert in-memory
	I0514 00:18:06.802398    4316 command_runner.go:130] ! W0514 00:16:54.858727       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0514 00:18:06.803479    4316 command_runner.go:130] ! W0514 00:16:54.858778       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:06.803611    4316 command_runner.go:130] ! W0514 00:16:54.858790       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0514 00:18:06.803611    4316 command_runner.go:130] ! W0514 00:16:54.858800       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0514 00:18:06.803679    4316 command_runner.go:130] ! I0514 00:16:54.945438       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0514 00:18:06.803741    4316 command_runner.go:130] ! I0514 00:16:54.945867       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:06.803787    4316 command_runner.go:130] ! I0514 00:16:54.953986       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0514 00:18:06.803787    4316 command_runner.go:130] ! I0514 00:16:54.957180       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0514 00:18:06.803787    4316 command_runner.go:130] ! I0514 00:16:54.957284       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:18:06.803867    4316 command_runner.go:130] ! I0514 00:16:54.957493       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:06.803895    4316 command_runner.go:130] ! I0514 00:16:55.058381       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:18:06.807563    4316 logs.go:123] Gathering logs for kube-proxy [b2a1b31cd7de] ...
	I0514 00:18:06.807626    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a1b31cd7de"
	I0514 00:18:06.831122    4316 command_runner.go:130] ! I0514 00:16:57.528613       1 server_linux.go:69] "Using iptables proxy"
	I0514 00:18:06.831122    4316 command_runner.go:130] ! I0514 00:16:57.562847       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.102.122"]
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.701301       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.701447       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.701476       1 server_linux.go:165] "Using iptables Proxier"
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.708219       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.708800       1 server.go:872] "Version info" version="v1.30.0"
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.708841       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.712422       1 config.go:192] "Starting service config controller"
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.712733       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.712780       1 config.go:101] "Starting endpoint slice config controller"
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.712824       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.715339       1 config.go:319] "Starting node config controller"
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.717651       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.815732       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.815811       1 shared_informer.go:320] Caches are synced for service config
	I0514 00:18:06.831208    4316 command_runner.go:130] ! I0514 00:16:57.818050       1 shared_informer.go:320] Caches are synced for node config
	I0514 00:18:06.832666    4316 logs.go:123] Gathering logs for kindnet [b7d8d9a5e5ea] ...
	I0514 00:18:06.832699    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d8d9a5e5ea"
	I0514 00:18:06.854234    4316 command_runner.go:130] ! I0514 00:16:57.751233       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0514 00:18:06.854234    4316 command_runner.go:130] ! I0514 00:16:57.751585       1 main.go:107] hostIP = 172.23.102.122
	I0514 00:18:06.854234    4316 command_runner.go:130] ! podIP = 172.23.102.122
	I0514 00:18:06.854234    4316 command_runner.go:130] ! I0514 00:16:57.752181       1 main.go:116] setting mtu 1500 for CNI 
	I0514 00:18:06.854234    4316 command_runner.go:130] ! I0514 00:16:57.752200       1 main.go:146] kindnetd IP family: "ipv4"
	I0514 00:18:06.854234    4316 command_runner.go:130] ! I0514 00:16:57.752221       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0514 00:18:06.854234    4316 command_runner.go:130] ! I0514 00:17:01.123977       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:06.854234    4316 command_runner.go:130] ! I0514 00:17:04.195495       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:06.854234    4316 command_runner.go:130] ! I0514 00:17:07.267636       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:06.854234    4316 command_runner.go:130] ! I0514 00:17:10.339619       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:06.855220    4316 command_runner.go:130] ! I0514 00:17:13.411801       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:06.855220    4316 command_runner.go:130] ! panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:06.855220    4316 command_runner.go:130] ! goroutine 1 [running]:
	I0514 00:18:06.855220    4316 command_runner.go:130] ! main.main()
	I0514 00:18:06.855220    4316 command_runner.go:130] ! 	/go/src/cmd/kindnetd/main.go:195 +0xd3d
	I0514 00:18:06.861781    4316 logs.go:123] Gathering logs for dmesg ...
	I0514 00:18:06.862563    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0514 00:18:06.883290    4316 command_runner.go:130] > [May14 00:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.104207] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.023601] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.058832] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.024495] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0514 00:18:06.884306    4316 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +5.692465] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.707713] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +1.789899] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +7.282690] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0514 00:18:06.884306    4316 command_runner.go:130] > [May14 00:16] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.158382] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [ +23.750429] systemd-fstab-generator[974]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.111929] kauditd_printk_skb: 73 callbacks suppressed
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.464883] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.164872] systemd-fstab-generator[1027]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.194348] systemd-fstab-generator[1041]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +2.832176] systemd-fstab-generator[1229]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.181315] systemd-fstab-generator[1241]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.160798] systemd-fstab-generator[1253]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.238904] systemd-fstab-generator[1268]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.787359] systemd-fstab-generator[1378]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +0.085936] kauditd_printk_skb: 205 callbacks suppressed
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +3.384697] systemd-fstab-generator[1513]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +1.802132] kauditd_printk_skb: 64 callbacks suppressed
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +5.213940] kauditd_printk_skb: 10 callbacks suppressed
	I0514 00:18:06.884306    4316 command_runner.go:130] > [  +3.471694] systemd-fstab-generator[2315]: Ignoring "noauto" option for root device
	I0514 00:18:06.884306    4316 command_runner.go:130] > [May14 00:17] kauditd_printk_skb: 70 callbacks suppressed
	I0514 00:18:06.886287    4316 logs.go:123] Gathering logs for etcd [08450c853590] ...
	I0514 00:18:06.886287    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08450c853590"
	I0514 00:18:06.911840    4316 command_runner.go:130] ! {"level":"warn","ts":"2024-05-14T00:16:51.687231Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0514 00:18:06.912015    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.691397Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.23.102.122:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.23.102.122:2380","--initial-cluster=multinode-101100=https://172.23.102.122:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.23.102.122:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.23.102.122:2380","--name=multinode-101100","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0514 00:18:06.912090    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.692425Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0514 00:18:06.912158    4316 command_runner.go:130] ! {"level":"warn","ts":"2024-05-14T00:16:51.693634Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0514 00:18:06.912158    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.693771Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.23.102.122:2380"]}
	I0514 00:18:06.912225    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.694117Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0514 00:18:06.912314    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.703219Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.23.102.122:2379"]}
	I0514 00:18:06.912489    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.704312Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-101100","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.23.102.122:2380"],"listen-peer-urls":["https://172.23.102.122:2380"],"advertise-client-urls":["https://172.23.102.122:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.102.122:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0514 00:18:06.912489    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.7264Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"19.905879ms"}
	I0514 00:18:06.912489    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.748539Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0514 00:18:06.912489    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.766395Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"bb849d1df0b559d7","local-member-id":"6e4c15c3d0f3380f","commit-index":1898}
	I0514 00:18:06.912489    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.767439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f switched to configuration voters=()"}
	I0514 00:18:06.912489    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.767611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became follower at term 2"}
	I0514 00:18:06.912489    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.768086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 6e4c15c3d0f3380f [peers: [], term: 2, commit: 1898, applied: 0, lastindex: 1898, lastterm: 2]"}
	I0514 00:18:06.912489    4316 command_runner.go:130] ! {"level":"warn","ts":"2024-05-14T00:16:51.782157Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0514 00:18:06.912489    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.786938Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1096}
	I0514 00:18:06.912489    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.797876Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1653}
	I0514 00:18:06.912489    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.80426Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0514 00:18:06.912489    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.81216Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"6e4c15c3d0f3380f","timeout":"7s"}
	I0514 00:18:06.913013    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.813213Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"6e4c15c3d0f3380f"}
	I0514 00:18:06.913054    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.814234Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"6e4c15c3d0f3380f","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0514 00:18:06.913079    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.815302Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0514 00:18:06.913079    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.816695Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0514 00:18:06.913079    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.816877Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0514 00:18:06.913079    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.816978Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0514 00:18:06.913079    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.817493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f switched to configuration voters=(7947751373170489359)"}
	I0514 00:18:06.913079    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.817687Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bb849d1df0b559d7","local-member-id":"6e4c15c3d0f3380f","added-peer-id":"6e4c15c3d0f3380f","added-peer-peer-urls":["https://172.23.106.39:2380"]}
	I0514 00:18:06.913079    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.817911Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bb849d1df0b559d7","local-member-id":"6e4c15c3d0f3380f","cluster-version":"3.5"}
	I0514 00:18:06.913079    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.818648Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0514 00:18:06.913079    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.83299Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0514 00:18:06.913079    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.834951Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6e4c15c3d0f3380f","initial-advertise-peer-urls":["https://172.23.102.122:2380"],"listen-peer-urls":["https://172.23.102.122:2380"],"advertise-client-urls":["https://172.23.102.122:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.102.122:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0514 00:18:06.913079    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.835138Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0514 00:18:06.913599    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.835469Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.23.102.122:2380"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.835603Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.23.102.122:2380"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.468953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f is starting a new election at term 2"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became pre-candidate at term 2"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f received MsgPreVoteResp from 6e4c15c3d0f3380f at term 2"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became candidate at term 3"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f received MsgVoteResp from 6e4c15c3d0f3380f at term 3"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became leader at term 3"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6e4c15c3d0f3380f elected leader 6e4c15c3d0f3380f at term 3"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.479025Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"6e4c15c3d0f3380f","local-member-attributes":"{Name:multinode-101100 ClientURLs:[https://172.23.102.122:2379]}","request-path":"/0/members/6e4c15c3d0f3380f/attributes","cluster-id":"bb849d1df0b559d7","publish-timeout":"7s"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.479459Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.479642Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.481317Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.481353Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.483334Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.23.102.122:2379"}
	I0514 00:18:06.913662    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.483616Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0514 00:18:06.919879    4316 logs.go:123] Gathering logs for coredns [dcc5a109288b] ...
	I0514 00:18:06.919879    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc5a109288b"
	I0514 00:18:06.946346    4316 command_runner.go:130] > .:53
	I0514 00:18:06.946346    4316 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = aa3c53a4fee7c79042020c4ad5abc53f615c90ace85c56ddcef4febd643c83c914a53a500e1bfe4eab6dd4f6a22b9d2014a8ba875b505ed10d3063ed95ac2ed3
	I0514 00:18:06.946346    4316 command_runner.go:130] > CoreDNS-1.11.1
	I0514 00:18:06.946346    4316 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0514 00:18:06.947333    4316 command_runner.go:130] > [INFO] 127.0.0.1:53257 - 27032 "HINFO IN 6976640239659908905.245956973392320689. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.05278328s
	I0514 00:18:06.947333    4316 logs.go:123] Gathering logs for container status ...
	I0514 00:18:06.947333    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0514 00:18:07.001003    4316 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0514 00:18:07.001003    4316 command_runner.go:130] > 3d0b2f0362eb4       8c811b4aec35f                                                                                         7 seconds ago        Running             busybox                   1                   8cb9b6d6d0915       busybox-fc5497c4f-xqj6w
	I0514 00:18:07.001003    4316 command_runner.go:130] > dcc5a109288b6       cbb01a7bd410d                                                                                         7 seconds ago        Running             coredns                   1                   1cccb5e8cee3b       coredns-7db6d8ff4d-4kmx4
	I0514 00:18:07.001141    4316 command_runner.go:130] > bde84ba2d4ed7       6e38f40d628db                                                                                         28 seconds ago       Running             storage-provisioner       2                   468a0e2976ae4       storage-provisioner
	I0514 00:18:07.001191    4316 command_runner.go:130] > 2b424a7cd98c8       4950bb10b3f87                                                                                         40 seconds ago       Running             kindnet-cni               2                   5233e076edceb       kindnet-9q2tv
	I0514 00:18:07.001261    4316 command_runner.go:130] > b7d8d9a5e5eaf       4950bb10b3f87                                                                                         About a minute ago   Exited              kindnet-cni               1                   5233e076edceb       kindnet-9q2tv
	I0514 00:18:07.001261    4316 command_runner.go:130] > b142687b621f1       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   468a0e2976ae4       storage-provisioner
	I0514 00:18:07.001361    4316 command_runner.go:130] > b2a1b31cd7dee       a0bf559e280cf                                                                                         About a minute ago   Running             kube-proxy                1                   a8ac60a565998       kube-proxy-zhcz6
	I0514 00:18:07.001409    4316 command_runner.go:130] > 08450c853590d       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   419648c0d4053       etcd-multinode-101100
	I0514 00:18:07.001472    4316 command_runner.go:130] > da9e6534cd87d       c42f13656d0b2                                                                                         About a minute ago   Running             kube-apiserver            0                   509b8407e0955       kube-apiserver-multinode-101100
	I0514 00:18:07.001472    4316 command_runner.go:130] > d3581c1c570cf       259c8277fcbbc                                                                                         About a minute ago   Running             kube-scheduler            1                   ddcaadef980ac       kube-scheduler-multinode-101100
	I0514 00:18:07.001598    4316 command_runner.go:130] > b87239d1199ab       c7aad43836fa5                                                                                         About a minute ago   Running             kube-controller-manager   1                   659643d47b9ae       kube-controller-manager-multinode-101100
	I0514 00:18:07.001669    4316 command_runner.go:130] > 57dea5416eb67       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago       Exited              busybox                   0                   76d1b8ce19aba       busybox-fc5497c4f-xqj6w
	I0514 00:18:07.001736    4316 command_runner.go:130] > 76c5ab7859eff       cbb01a7bd410d                                                                                         21 minutes ago       Exited              coredns                   0                   8bb49b28c842a       coredns-7db6d8ff4d-4kmx4
	I0514 00:18:07.001736    4316 command_runner.go:130] > 91edaaa00da23       a0bf559e280cf                                                                                         21 minutes ago       Exited              kube-proxy                0                   9bd694480978f       kube-proxy-zhcz6
	I0514 00:18:07.001803    4316 command_runner.go:130] > e96f94398d6dd       c7aad43836fa5                                                                                         22 minutes ago       Exited              kube-controller-manager   0                   da9268fd6556b       kube-controller-manager-multinode-101100
	I0514 00:18:07.001908    4316 command_runner.go:130] > 964887fc5d362       259c8277fcbbc                                                                                         22 minutes ago       Exited              kube-scheduler            0                   fcb3b27edcd2a       kube-scheduler-multinode-101100
	I0514 00:18:07.005539    4316 logs.go:123] Gathering logs for kube-apiserver [da9e6534cd87] ...
	I0514 00:18:07.005539    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e6534cd87"
	I0514 00:18:07.029007    4316 command_runner.go:130] ! I0514 00:16:52.020111       1 options.go:221] external host was not specified, using 172.23.102.122
	I0514 00:18:07.037607    4316 command_runner.go:130] ! I0514 00:16:52.031119       1 server.go:148] Version: v1.30.0
	I0514 00:18:07.037607    4316 command_runner.go:130] ! I0514 00:16:52.031201       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:07.037607    4316 command_runner.go:130] ! I0514 00:16:52.560170       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0514 00:18:07.037734    4316 command_runner.go:130] ! I0514 00:16:52.562027       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0514 00:18:07.037734    4316 command_runner.go:130] ! I0514 00:16:52.567323       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0514 00:18:07.037986    4316 command_runner.go:130] ! I0514 00:16:52.562214       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0514 00:18:07.038048    4316 command_runner.go:130] ! I0514 00:16:52.570134       1 instance.go:299] Using reconciler: lease
	I0514 00:18:07.038048    4316 command_runner.go:130] ! I0514 00:16:53.544464       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0514 00:18:07.038048    4316 command_runner.go:130] ! W0514 00:16:53.544866       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.038114    4316 command_runner.go:130] ! I0514 00:16:53.780904       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0514 00:18:07.038114    4316 command_runner.go:130] ! I0514 00:16:53.781233       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0514 00:18:07.038114    4316 command_runner.go:130] ! I0514 00:16:54.015006       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0514 00:18:07.038114    4316 command_runner.go:130] ! I0514 00:16:54.172205       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0514 00:18:07.038185    4316 command_runner.go:130] ! I0514 00:16:54.186014       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0514 00:18:07.038185    4316 command_runner.go:130] ! W0514 00:16:54.186188       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.038185    4316 command_runner.go:130] ! W0514 00:16:54.186609       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:07.038252    4316 command_runner.go:130] ! I0514 00:16:54.187573       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0514 00:18:07.038252    4316 command_runner.go:130] ! W0514 00:16:54.187695       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.038252    4316 command_runner.go:130] ! I0514 00:16:54.188811       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0514 00:18:07.038322    4316 command_runner.go:130] ! I0514 00:16:54.190200       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0514 00:18:07.038322    4316 command_runner.go:130] ! W0514 00:16:54.190309       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0514 00:18:07.038322    4316 command_runner.go:130] ! W0514 00:16:54.190366       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0514 00:18:07.038322    4316 command_runner.go:130] ! I0514 00:16:54.192283       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0514 00:18:07.038389    4316 command_runner.go:130] ! W0514 00:16:54.192583       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0514 00:18:07.038389    4316 command_runner.go:130] ! I0514 00:16:54.193726       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0514 00:18:07.038389    4316 command_runner.go:130] ! W0514 00:16:54.193833       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.038450    4316 command_runner.go:130] ! W0514 00:16:54.193842       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:07.038450    4316 command_runner.go:130] ! I0514 00:16:54.194656       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0514 00:18:07.038450    4316 command_runner.go:130] ! W0514 00:16:54.194769       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.038516    4316 command_runner.go:130] ! W0514 00:16:54.194831       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.038516    4316 command_runner.go:130] ! I0514 00:16:54.195773       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0514 00:18:07.038516    4316 command_runner.go:130] ! I0514 00:16:54.200522       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0514 00:18:07.038585    4316 command_runner.go:130] ! W0514 00:16:54.200808       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.038585    4316 command_runner.go:130] ! W0514 00:16:54.201073       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:07.038585    4316 command_runner.go:130] ! I0514 00:16:54.202173       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0514 00:18:07.038649    4316 command_runner.go:130] ! W0514 00:16:54.202352       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.038649    4316 command_runner.go:130] ! W0514 00:16:54.202465       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:07.038649    4316 command_runner.go:130] ! I0514 00:16:54.204036       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0514 00:18:07.038719    4316 command_runner.go:130] ! W0514 00:16:54.204232       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0514 00:18:07.038719    4316 command_runner.go:130] ! I0514 00:16:54.213708       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0514 00:18:07.038719    4316 command_runner.go:130] ! W0514 00:16:54.213869       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.038784    4316 command_runner.go:130] ! W0514 00:16:54.213992       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:07.038784    4316 command_runner.go:130] ! I0514 00:16:54.214976       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0514 00:18:07.038784    4316 command_runner.go:130] ! W0514 00:16:54.215217       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.038784    4316 command_runner.go:130] ! W0514 00:16:54.215317       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:07.038852    4316 command_runner.go:130] ! I0514 00:16:54.226860       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0514 00:18:07.038852    4316 command_runner.go:130] ! W0514 00:16:54.227134       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.038852    4316 command_runner.go:130] ! W0514 00:16:54.227258       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:07.038917    4316 command_runner.go:130] ! I0514 00:16:54.230259       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0514 00:18:07.038917    4316 command_runner.go:130] ! I0514 00:16:54.232567       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0514 00:18:07.038917    4316 command_runner.go:130] ! W0514 00:16:54.232734       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0514 00:18:07.038917    4316 command_runner.go:130] ! W0514 00:16:54.232824       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.038986    4316 command_runner.go:130] ! I0514 00:16:54.239186       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0514 00:18:07.038986    4316 command_runner.go:130] ! W0514 00:16:54.239294       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0514 00:18:07.038986    4316 command_runner.go:130] ! W0514 00:16:54.239304       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0514 00:18:07.038986    4316 command_runner.go:130] ! I0514 00:16:54.241605       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0514 00:18:07.039071    4316 command_runner.go:130] ! W0514 00:16:54.241703       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.039071    4316 command_runner.go:130] ! W0514 00:16:54.241712       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:07.039071    4316 command_runner.go:130] ! I0514 00:16:54.242373       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0514 00:18:07.039126    4316 command_runner.go:130] ! W0514 00:16:54.242466       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.039126    4316 command_runner.go:130] ! I0514 00:16:54.259244       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0514 00:18:07.039186    4316 command_runner.go:130] ! W0514 00:16:54.259536       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:07.039186    4316 command_runner.go:130] ! I0514 00:16:54.792225       1 secure_serving.go:213] Serving securely on [::]:8443
	I0514 00:18:07.039186    4316 command_runner.go:130] ! I0514 00:16:54.792432       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0514 00:18:07.039250    4316 command_runner.go:130] ! I0514 00:16:54.794552       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0514 00:18:07.039311    4316 command_runner.go:130] ! I0514 00:16:54.794677       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0514 00:18:07.039311    4316 command_runner.go:130] ! I0514 00:16:54.794720       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0514 00:18:07.039374    4316 command_runner.go:130] ! I0514 00:16:54.795157       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0514 00:18:07.039374    4316 command_runner.go:130] ! I0514 00:16:54.795787       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0514 00:18:07.039374    4316 command_runner.go:130] ! I0514 00:16:54.795995       1 controller.go:116] Starting legacy_token_tracking_controller
	I0514 00:18:07.039374    4316 command_runner.go:130] ! I0514 00:16:54.796042       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0514 00:18:07.039445    4316 command_runner.go:130] ! I0514 00:16:54.796156       1 controller.go:78] Starting OpenAPI AggregationController
	I0514 00:18:07.039445    4316 command_runner.go:130] ! I0514 00:16:54.796272       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0514 00:18:07.039445    4316 command_runner.go:130] ! I0514 00:16:54.797969       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0514 00:18:07.039511    4316 command_runner.go:130] ! I0514 00:16:54.798688       1 available_controller.go:423] Starting AvailableConditionController
	I0514 00:18:07.039511    4316 command_runner.go:130] ! I0514 00:16:54.798701       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0514 00:18:07.039511    4316 command_runner.go:130] ! I0514 00:16:54.799424       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0514 00:18:07.039572    4316 command_runner.go:130] ! I0514 00:16:54.799667       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0514 00:18:07.039572    4316 command_runner.go:130] ! I0514 00:16:54.799692       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0514 00:18:07.039572    4316 command_runner.go:130] ! I0514 00:16:54.800971       1 aggregator.go:163] waiting for initial CRD sync...
	I0514 00:18:07.039634    4316 command_runner.go:130] ! I0514 00:16:54.792447       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:07.039634    4316 command_runner.go:130] ! I0514 00:16:54.792459       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0514 00:18:07.039694    4316 command_runner.go:130] ! I0514 00:16:54.792473       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:07.039694    4316 command_runner.go:130] ! I0514 00:16:54.812587       1 controller.go:139] Starting OpenAPI controller
	I0514 00:18:07.039694    4316 command_runner.go:130] ! I0514 00:16:54.812611       1 controller.go:87] Starting OpenAPI V3 controller
	I0514 00:18:07.039694    4316 command_runner.go:130] ! I0514 00:16:54.812626       1 naming_controller.go:291] Starting NamingConditionController
	I0514 00:18:07.039757    4316 command_runner.go:130] ! I0514 00:16:54.812640       1 establishing_controller.go:76] Starting EstablishingController
	I0514 00:18:07.039757    4316 command_runner.go:130] ! I0514 00:16:54.812660       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0514 00:18:07.039757    4316 command_runner.go:130] ! I0514 00:16:54.812674       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0514 00:18:07.039817    4316 command_runner.go:130] ! I0514 00:16:54.812685       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0514 00:18:07.039817    4316 command_runner.go:130] ! I0514 00:16:54.848957       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:07.039817    4316 command_runner.go:130] ! I0514 00:16:54.849152       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0514 00:18:07.039879    4316 command_runner.go:130] ! I0514 00:16:54.850275       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0514 00:18:07.039879    4316 command_runner.go:130] ! I0514 00:16:54.850299       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0514 00:18:07.039879    4316 command_runner.go:130] ! I0514 00:16:54.906495       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0514 00:18:07.039939    4316 command_runner.go:130] ! I0514 00:16:54.938841       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0514 00:18:07.039939    4316 command_runner.go:130] ! I0514 00:16:54.950730       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0514 00:18:07.039939    4316 command_runner.go:130] ! I0514 00:16:54.950897       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0514 00:18:07.039939    4316 command_runner.go:130] ! I0514 00:16:54.951294       1 aggregator.go:165] initial CRD sync complete...
	I0514 00:18:07.040002    4316 command_runner.go:130] ! I0514 00:16:54.951545       1 autoregister_controller.go:141] Starting autoregister controller
	I0514 00:18:07.040002    4316 command_runner.go:130] ! I0514 00:16:54.951793       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0514 00:18:07.040002    4316 command_runner.go:130] ! I0514 00:16:54.951875       1 cache.go:39] Caches are synced for autoregister controller
	I0514 00:18:07.040063    4316 command_runner.go:130] ! I0514 00:16:54.962299       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0514 00:18:07.040063    4316 command_runner.go:130] ! I0514 00:16:54.968027       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0514 00:18:07.040063    4316 command_runner.go:130] ! I0514 00:16:54.968302       1 policy_source.go:224] refreshing policies
	I0514 00:18:07.040127    4316 command_runner.go:130] ! I0514 00:16:54.997391       1 shared_informer.go:320] Caches are synced for configmaps
	I0514 00:18:07.040127    4316 command_runner.go:130] ! I0514 00:16:54.999391       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0514 00:18:07.040127    4316 command_runner.go:130] ! I0514 00:16:54.999732       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0514 00:18:07.040187    4316 command_runner.go:130] ! I0514 00:16:54.999871       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0514 00:18:07.040187    4316 command_runner.go:130] ! I0514 00:16:55.037244       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0514 00:18:07.040187    4316 command_runner.go:130] ! I0514 00:16:55.824524       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0514 00:18:07.040246    4316 command_runner.go:130] ! W0514 00:16:56.521956       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.102.122 172.23.106.39]
	I0514 00:18:07.040371    4316 command_runner.go:130] ! I0514 00:16:56.523614       1 controller.go:615] quota admission added evaluator for: endpoints
	I0514 00:18:07.040371    4316 command_runner.go:130] ! I0514 00:16:56.536716       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0514 00:18:07.040371    4316 command_runner.go:130] ! I0514 00:16:57.861026       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0514 00:18:07.040371    4316 command_runner.go:130] ! I0514 00:16:58.068043       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0514 00:18:07.040371    4316 command_runner.go:130] ! I0514 00:16:58.085925       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0514 00:18:07.040371    4316 command_runner.go:130] ! I0514 00:16:58.189328       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0514 00:18:07.040371    4316 command_runner.go:130] ! I0514 00:16:58.200849       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0514 00:18:07.040371    4316 command_runner.go:130] ! W0514 00:17:16.528300       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.102.122]
	I0514 00:18:07.050652    4316 logs.go:123] Gathering logs for kube-controller-manager [b87239d1199a] ...
	I0514 00:18:07.051189    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87239d1199a"
	I0514 00:18:07.074502    4316 command_runner.go:130] ! I0514 00:16:52.414723       1 serving.go:380] Generated self-signed cert in-memory
	I0514 00:18:07.074502    4316 command_runner.go:130] ! I0514 00:16:52.798318       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0514 00:18:07.075367    4316 command_runner.go:130] ! I0514 00:16:52.798456       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:07.075367    4316 command_runner.go:130] ! I0514 00:16:52.802364       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0514 00:18:07.075367    4316 command_runner.go:130] ! I0514 00:16:52.802939       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0514 00:18:07.075367    4316 command_runner.go:130] ! I0514 00:16:52.803159       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:52.803510       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.867503       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.868219       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.874269       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.878308       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.878330       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.878409       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.878509       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.878517       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.882632       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.882648       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.882656       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.883478       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.883488       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.883496       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.885766       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.888273       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.888463       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.889304       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.890244       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.890408       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.893619       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.903162       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.903183       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.969340       1 shared_informer.go:320] Caches are synced for tokens
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.982656       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0514 00:18:07.075455    4316 command_runner.go:130] ! I0514 00:16:56.982729       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0514 00:18:07.075989    4316 command_runner.go:130] ! I0514 00:16:56.983268       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0514 00:18:07.075989    4316 command_runner.go:130] ! I0514 00:16:56.983299       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0514 00:18:07.075989    4316 command_runner.go:130] ! I0514 00:16:56.983354       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0514 00:18:07.076111    4316 command_runner.go:130] ! I0514 00:16:56.983426       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0514 00:18:07.076111    4316 command_runner.go:130] ! I0514 00:16:56.983451       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0514 00:18:07.076111    4316 command_runner.go:130] ! W0514 00:16:56.983466       1 shared_informer.go:597] resyncPeriod 15h46m20.096782659s is smaller than resyncCheckPeriod 18h37m10.298700604s and the informer has already started. Changing it to 18h37m10.298700604s
	I0514 00:18:07.076111    4316 command_runner.go:130] ! I0514 00:16:56.983922       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0514 00:18:07.076226    4316 command_runner.go:130] ! I0514 00:16:56.984377       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0514 00:18:07.076226    4316 command_runner.go:130] ! I0514 00:16:56.984435       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0514 00:18:07.076226    4316 command_runner.go:130] ! I0514 00:16:56.984460       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0514 00:18:07.076299    4316 command_runner.go:130] ! I0514 00:16:56.984478       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0514 00:18:07.076299    4316 command_runner.go:130] ! I0514 00:16:56.984528       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0514 00:18:07.076377    4316 command_runner.go:130] ! I0514 00:16:56.984568       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0514 00:18:07.076377    4316 command_runner.go:130] ! I0514 00:16:56.984736       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0514 00:18:07.076377    4316 command_runner.go:130] ! I0514 00:16:56.985288       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0514 00:18:07.076473    4316 command_runner.go:130] ! I0514 00:16:56.995607       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0514 00:18:07.076506    4316 command_runner.go:130] ! I0514 00:16:56.996188       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0514 00:18:07.076538    4316 command_runner.go:130] ! I0514 00:16:56.997004       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0514 00:18:07.076577    4316 command_runner.go:130] ! I0514 00:16:56.997141       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0514 00:18:07.076627    4316 command_runner.go:130] ! I0514 00:16:56.997174       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0514 00:18:07.076627    4316 command_runner.go:130] ! I0514 00:16:56.997363       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0514 00:18:07.076669    4316 command_runner.go:130] ! I0514 00:16:56.997373       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0514 00:18:07.076669    4316 command_runner.go:130] ! I0514 00:16:57.003479       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0514 00:18:07.076669    4316 command_runner.go:130] ! I0514 00:16:57.004086       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0514 00:18:07.076739    4316 command_runner.go:130] ! I0514 00:16:57.004336       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0514 00:18:07.076739    4316 command_runner.go:130] ! I0514 00:16:57.004348       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0514 00:18:07.076812    4316 command_runner.go:130] ! I0514 00:17:07.031733       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0514 00:18:07.076812    4316 command_runner.go:130] ! I0514 00:17:07.032143       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0514 00:18:07.076812    4316 command_runner.go:130] ! I0514 00:17:07.032242       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0514 00:18:07.076812    4316 command_runner.go:130] ! I0514 00:17:07.032648       1 shared_informer.go:313] Waiting for caches to sync for node
	I0514 00:18:07.076911    4316 command_runner.go:130] ! I0514 00:17:07.034995       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0514 00:18:07.076911    4316 command_runner.go:130] ! I0514 00:17:07.035109       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0514 00:18:07.076996    4316 command_runner.go:130] ! I0514 00:17:07.035510       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.035544       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.035551       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.038183       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.038394       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.039212       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.040784       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.041050       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.041194       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.043909       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.044044       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.044106       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.059101       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.059352       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.059503       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.062189       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.062615       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.062641       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.070971       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.071021       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.071151       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.071293       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.071328       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.071388       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.083342       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.084321       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.084474       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0514 00:18:07.077034    4316 command_runner.go:130] ! I0514 00:17:07.085952       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0514 00:18:07.077570    4316 command_runner.go:130] ! I0514 00:17:07.086347       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0514 00:18:07.077570    4316 command_runner.go:130] ! I0514 00:17:07.086569       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0514 00:18:07.077570    4316 command_runner.go:130] ! I0514 00:17:07.088414       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0514 00:18:07.077570    4316 command_runner.go:130] ! I0514 00:17:07.088731       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0514 00:18:07.077671    4316 command_runner.go:130] ! I0514 00:17:07.089444       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0514 00:18:07.077671    4316 command_runner.go:130] ! I0514 00:17:07.091486       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0514 00:18:07.077671    4316 command_runner.go:130] ! I0514 00:17:07.091650       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0514 00:18:07.077671    4316 command_runner.go:130] ! I0514 00:17:07.091678       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0514 00:18:07.077754    4316 command_runner.go:130] ! I0514 00:17:07.094570       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0514 00:18:07.077754    4316 command_runner.go:130] ! I0514 00:17:07.095467       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0514 00:18:07.077754    4316 command_runner.go:130] ! I0514 00:17:07.095818       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0514 00:18:07.077835    4316 command_runner.go:130] ! I0514 00:17:07.097778       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0514 00:18:07.077835    4316 command_runner.go:130] ! I0514 00:17:07.098911       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0514 00:18:07.077835    4316 command_runner.go:130] ! I0514 00:17:07.098939       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0514 00:18:07.077835    4316 command_runner.go:130] ! I0514 00:17:07.100648       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0514 00:18:07.077835    4316 command_runner.go:130] ! I0514 00:17:07.101514       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0514 00:18:07.077909    4316 command_runner.go:130] ! I0514 00:17:07.101659       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0514 00:18:07.077909    4316 command_runner.go:130] ! I0514 00:17:07.103436       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0514 00:18:07.077909    4316 command_runner.go:130] ! I0514 00:17:07.103908       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0514 00:18:07.077909    4316 command_runner.go:130] ! I0514 00:17:07.109194       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0514 00:18:07.077981    4316 command_runner.go:130] ! I0514 00:17:07.109267       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0514 00:18:07.077981    4316 command_runner.go:130] ! I0514 00:17:07.109496       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0514 00:18:07.077981    4316 command_runner.go:130] ! I0514 00:17:07.113760       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0514 00:18:07.078032    4316 command_runner.go:130] ! I0514 00:17:07.114024       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0514 00:18:07.078032    4316 command_runner.go:130] ! I0514 00:17:07.114252       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0514 00:18:07.078032    4316 command_runner.go:130] ! I0514 00:17:07.115259       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0514 00:18:07.078075    4316 command_runner.go:130] ! I0514 00:17:07.116925       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0514 00:18:07.078075    4316 command_runner.go:130] ! I0514 00:17:07.117254       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0514 00:18:07.078075    4316 command_runner.go:130] ! I0514 00:17:07.117353       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0514 00:18:07.078075    4316 command_runner.go:130] ! I0514 00:17:07.121368       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0514 00:18:07.078075    4316 command_runner.go:130] ! I0514 00:17:07.121764       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0514 00:18:07.078163    4316 command_runner.go:130] ! I0514 00:17:07.121788       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0514 00:18:07.078182    4316 command_runner.go:130] ! I0514 00:17:07.122128       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0514 00:18:07.078182    4316 command_runner.go:130] ! I0514 00:17:07.122156       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0514 00:18:07.078182    4316 command_runner.go:130] ! I0514 00:17:07.122248       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0514 00:18:07.078266    4316 command_runner.go:130] ! I0514 00:17:07.122301       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0514 00:18:07.078266    4316 command_runner.go:130] ! I0514 00:17:07.122371       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0514 00:18:07.078317    4316 command_runner.go:130] ! I0514 00:17:07.122432       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0514 00:18:07.078317    4316 command_runner.go:130] ! I0514 00:17:07.122464       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:07.078317    4316 command_runner.go:130] ! I0514 00:17:07.122706       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:07.078369    4316 command_runner.go:130] ! I0514 00:17:07.123282       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:07.078369    4316 command_runner.go:130] ! I0514 00:17:07.123678       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:07.078369    4316 command_runner.go:130] ! I0514 00:17:07.126535       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0514 00:18:07.078369    4316 command_runner.go:130] ! I0514 00:17:07.126692       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0514 00:18:07.078369    4316 command_runner.go:130] ! E0514 00:17:07.165594       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0514 00:18:07.078369    4316 command_runner.go:130] ! I0514 00:17:07.165634       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0514 00:18:07.078463    4316 command_runner.go:130] ! I0514 00:17:07.218097       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0514 00:18:07.078463    4316 command_runner.go:130] ! I0514 00:17:07.218271       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.218379       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.218721       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.265917       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.266033       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.266045       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.315398       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.315511       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.315534       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.415899       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.416022       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.465981       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.466026       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.466177       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.466545       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.516337       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.516498       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.516515       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.567477       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.567616       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.567627       1 shared_informer.go:313] Waiting for caches to sync for job
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.617346       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.617464       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.617476       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0514 00:18:07.078517    4316 command_runner.go:130] ! E0514 00:17:07.665765       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.665865       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.665876       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.671623       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.693623       1 shared_informer.go:320] Caches are synced for crt configmap
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.703208       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.707002       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100\" does not exist"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.707898       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m02\" does not exist"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.708010       1 shared_informer.go:320] Caches are synced for daemon sets
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.708168       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m03\" does not exist"
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.710800       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.710879       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.716140       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.716709       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0514 00:18:07.078517    4316 command_runner.go:130] ! I0514 00:17:07.717695       1 shared_informer.go:320] Caches are synced for cronjob
	I0514 00:18:07.079039    4316 command_runner.go:130] ! I0514 00:17:07.717710       1 shared_informer.go:320] Caches are synced for stateful set
	I0514 00:18:07.079039    4316 command_runner.go:130] ! I0514 00:17:07.718924       1 shared_informer.go:320] Caches are synced for attach detach
	I0514 00:18:07.079039    4316 command_runner.go:130] ! I0514 00:17:07.723267       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0514 00:18:07.079039    4316 command_runner.go:130] ! I0514 00:17:07.723378       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0514 00:18:07.079039    4316 command_runner.go:130] ! I0514 00:17:07.723467       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0514 00:18:07.079039    4316 command_runner.go:130] ! I0514 00:17:07.723495       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0514 00:18:07.079039    4316 command_runner.go:130] ! I0514 00:17:07.726980       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0514 00:18:07.079039    4316 command_runner.go:130] ! I0514 00:17:07.733271       1 shared_informer.go:320] Caches are synced for node
	I0514 00:18:07.079039    4316 command_runner.go:130] ! I0514 00:17:07.733445       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0514 00:18:07.079039    4316 command_runner.go:130] ! I0514 00:17:07.733467       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0514 00:18:07.079039    4316 command_runner.go:130] ! I0514 00:17:07.733473       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0514 00:18:07.079168    4316 command_runner.go:130] ! I0514 00:17:07.733480       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0514 00:18:07.079168    4316 command_runner.go:130] ! I0514 00:17:07.739996       1 shared_informer.go:320] Caches are synced for expand
	I0514 00:18:07.079168    4316 command_runner.go:130] ! I0514 00:17:07.742032       1 shared_informer.go:320] Caches are synced for PV protection
	I0514 00:18:07.079205    4316 command_runner.go:130] ! I0514 00:17:07.744959       1 shared_informer.go:320] Caches are synced for ephemeral
	I0514 00:18:07.079205    4316 command_runner.go:130] ! I0514 00:17:07.760453       1 shared_informer.go:320] Caches are synced for namespace
	I0514 00:18:07.079205    4316 command_runner.go:130] ! I0514 00:17:07.762790       1 shared_informer.go:320] Caches are synced for service account
	I0514 00:18:07.079205    4316 command_runner.go:130] ! I0514 00:17:07.766175       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0514 00:18:07.079205    4316 command_runner.go:130] ! I0514 00:17:07.767750       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0514 00:18:07.079205    4316 command_runner.go:130] ! I0514 00:17:07.768151       1 shared_informer.go:320] Caches are synced for job
	I0514 00:18:07.079205    4316 command_runner.go:130] ! I0514 00:17:07.779225       1 shared_informer.go:320] Caches are synced for TTL
	I0514 00:18:07.079305    4316 command_runner.go:130] ! I0514 00:17:07.779406       1 shared_informer.go:320] Caches are synced for GC
	I0514 00:18:07.079305    4316 command_runner.go:130] ! I0514 00:17:07.784902       1 shared_informer.go:320] Caches are synced for HPA
	I0514 00:18:07.079305    4316 command_runner.go:130] ! I0514 00:17:07.787441       1 shared_informer.go:320] Caches are synced for persistent volume
	I0514 00:18:07.079305    4316 command_runner.go:130] ! I0514 00:17:07.790178       1 shared_informer.go:320] Caches are synced for PVC protection
	I0514 00:18:07.079305    4316 command_runner.go:130] ! I0514 00:17:07.791571       1 shared_informer.go:320] Caches are synced for endpoint
	I0514 00:18:07.079305    4316 command_runner.go:130] ! I0514 00:17:07.797318       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0514 00:18:07.079305    4316 command_runner.go:130] ! I0514 00:17:07.816750       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0514 00:18:07.079305    4316 command_runner.go:130] ! I0514 00:17:07.836762       1 shared_informer.go:320] Caches are synced for taint
	I0514 00:18:07.079305    4316 command_runner.go:130] ! I0514 00:17:07.837127       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0514 00:18:07.079413    4316 command_runner.go:130] ! I0514 00:17:07.869081       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m03"
	I0514 00:18:07.079413    4316 command_runner.go:130] ! I0514 00:17:07.869544       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m02"
	I0514 00:18:07.079808    4316 command_runner.go:130] ! I0514 00:17:07.869413       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100"
	I0514 00:18:07.079808    4316 command_runner.go:130] ! I0514 00:17:07.870789       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0514 00:18:07.079808    4316 command_runner.go:130] ! I0514 00:17:07.898670       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 00:18:07.079881    4316 command_runner.go:130] ! I0514 00:17:07.901033       1 shared_informer.go:320] Caches are synced for deployment
	I0514 00:18:07.079881    4316 command_runner.go:130] ! I0514 00:17:07.904366       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0514 00:18:07.079911    4316 command_runner.go:130] ! I0514 00:17:07.916125       1 shared_informer.go:320] Caches are synced for disruption
	I0514 00:18:07.079911    4316 command_runner.go:130] ! I0514 00:17:07.977330       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 00:18:07.079950    4316 command_runner.go:130] ! I0514 00:17:07.988956       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0514 00:18:07.079950    4316 command_runner.go:130] ! I0514 00:17:08.134754       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="230.307102ms"
	I0514 00:18:07.079950    4316 command_runner.go:130] ! I0514 00:17:08.134896       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.6µs"
	I0514 00:18:07.079992    4316 command_runner.go:130] ! I0514 00:17:08.140785       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="234.508146ms"
	I0514 00:18:07.080010    4316 command_runner.go:130] ! I0514 00:17:08.140977       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.3µs"
	I0514 00:18:07.080010    4316 command_runner.go:130] ! I0514 00:17:08.412419       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 00:18:07.080010    4316 command_runner.go:130] ! I0514 00:17:08.472034       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 00:18:07.080010    4316 command_runner.go:130] ! I0514 00:17:08.472384       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0514 00:18:07.080099    4316 command_runner.go:130] ! I0514 00:17:37.878702       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0514 00:18:07.080099    4316 command_runner.go:130] ! I0514 00:18:01.608725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.75856ms"
	I0514 00:18:07.080124    4316 command_runner.go:130] ! I0514 00:18:01.608844       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.702µs"
	I0514 00:18:07.080124    4316 command_runner.go:130] ! I0514 00:18:01.651304       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.008µs"
	I0514 00:18:07.080124    4316 command_runner.go:130] ! I0514 00:18:01.710123       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.783088ms"
	I0514 00:18:07.080185    4316 command_runner.go:130] ! I0514 00:18:01.711762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.302µs"
	I0514 00:18:07.093635    4316 logs.go:123] Gathering logs for Docker ...
	I0514 00:18:07.093635    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0514 00:18:07.123038    4316 command_runner.go:130] > May 14 00:15:30 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0514 00:18:07.123038    4316 command_runner.go:130] > May 14 00:15:30 minikube cri-dockerd[223]: time="2024-05-14T00:15:30Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0514 00:18:07.123038    4316 command_runner.go:130] > May 14 00:15:30 minikube cri-dockerd[223]: time="2024-05-14T00:15:30Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0514 00:18:07.123038    4316 command_runner.go:130] > May 14 00:15:30 minikube cri-dockerd[223]: time="2024-05-14T00:15:30Z" level=info msg="Start docker client with request timeout 0s"
	I0514 00:18:07.123038    4316 command_runner.go:130] > May 14 00:15:30 minikube cri-dockerd[223]: time="2024-05-14T00:15:30Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0514 00:18:07.123038    4316 command_runner.go:130] > May 14 00:15:31 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:07.123038    4316 command_runner.go:130] > May 14 00:15:31 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0514 00:18:07.123038    4316 command_runner.go:130] > May 14 00:15:31 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0514 00:18:07.123038    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0514 00:18:07.123038    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0514 00:18:07.123367    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0514 00:18:07.123367    4316 command_runner.go:130] > May 14 00:15:33 minikube cri-dockerd[418]: time="2024-05-14T00:15:33Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0514 00:18:07.123367    4316 command_runner.go:130] > May 14 00:15:33 minikube cri-dockerd[418]: time="2024-05-14T00:15:33Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0514 00:18:07.123367    4316 command_runner.go:130] > May 14 00:15:33 minikube cri-dockerd[418]: time="2024-05-14T00:15:33Z" level=info msg="Start docker client with request timeout 0s"
	I0514 00:18:07.123418    4316 command_runner.go:130] > May 14 00:15:33 minikube cri-dockerd[418]: time="2024-05-14T00:15:33Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0514 00:18:07.123418    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:07.123418    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0514 00:18:07.123418    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0514 00:18:07.123418    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0514 00:18:07.123489    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0514 00:18:07.123489    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0514 00:18:07.123489    4316 command_runner.go:130] > May 14 00:15:36 minikube cri-dockerd[426]: time="2024-05-14T00:15:36Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0514 00:18:07.123532    4316 command_runner.go:130] > May 14 00:15:36 minikube cri-dockerd[426]: time="2024-05-14T00:15:36Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0514 00:18:07.123532    4316 command_runner.go:130] > May 14 00:15:36 minikube cri-dockerd[426]: time="2024-05-14T00:15:36Z" level=info msg="Start docker client with request timeout 0s"
	I0514 00:18:07.123532    4316 command_runner.go:130] > May 14 00:15:36 minikube cri-dockerd[426]: time="2024-05-14T00:15:36Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0514 00:18:07.123532    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:07.123532    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0514 00:18:07.123610    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0514 00:18:07.123610    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0514 00:18:07.123610    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0514 00:18:07.123610    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0514 00:18:07.123610    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0514 00:18:07.123610    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0514 00:18:07.123670    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 systemd[1]: Starting Docker Application Container Engine...
	I0514 00:18:07.123670    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[654]: time="2024-05-14T00:16:17.349024460Z" level=info msg="Starting up"
	I0514 00:18:07.123670    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[654]: time="2024-05-14T00:16:17.349886331Z" level=info msg="containerd not running, starting managed containerd"
	I0514 00:18:07.123670    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[654]: time="2024-05-14T00:16:17.351031392Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=660
	I0514 00:18:07.123739    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.380428255Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0514 00:18:07.123739    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.407060046Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0514 00:18:07.123790    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.407104860Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0514 00:18:07.123790    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.407157277Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0514 00:18:07.123790    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.407182685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.123861    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408093872Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:07.123861    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408200005Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.123924    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408421875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:07.123924    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408522107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.123978    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408552116Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0514 00:18:07.123978    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408565820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.123978    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.409126597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.124030    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.409855027Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.124030    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.412841968Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:07.124091    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.412982412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.124140    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.413109352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:07.124140    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.413195779Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0514 00:18:07.124140    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.414192994Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0514 00:18:07.124140    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.414303628Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0514 00:18:07.124140    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.414321234Z" level=info msg="metadata content store policy set" policy=shared
	I0514 00:18:07.124237    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420644226Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0514 00:18:07.124237    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420793973Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0514 00:18:07.124237    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420815380Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0514 00:18:07.124237    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420835086Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0514 00:18:07.124302    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420849391Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0514 00:18:07.124302    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421006640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0514 00:18:07.124340    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421303834Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0514 00:18:07.124340    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421395163Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0514 00:18:07.124453    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421479890Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0514 00:18:07.124453    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421494994Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0514 00:18:07.124548    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421507198Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.124586    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421523703Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.124622    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421540509Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.124622    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421554613Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.124691    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421571518Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.124691    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421584022Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.124691    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421594526Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.124760    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421604629Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.124760    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421626336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.124760    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421639040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.124817    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421651344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.124817    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421662947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.124817    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421673350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.124868    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421684554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.124868    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421695257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.124916    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421705961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.124916    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421717564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.124916    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421730268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.124967    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421774782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.124967    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421787286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.125030    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421797990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.125030    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421811094Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0514 00:18:07.125030    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421828299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.125082    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421838703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.125082    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421849206Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0514 00:18:07.125082    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421898721Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0514 00:18:07.125144    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421926330Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0514 00:18:07.125144    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421987549Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422004755Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422070276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422106987Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422118891Z" level=info msg="NRI interface is disabled by configuration."
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422453196Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422571233Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422619148Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422687970Z" level=info msg="containerd successfully booted in 0.044863s"
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:18 multinode-101100 dockerd[654]: time="2024-05-14T00:16:18.404653025Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:18 multinode-101100 dockerd[654]: time="2024-05-14T00:16:18.578701970Z" level=info msg="Loading containers: start."
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.027152626Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.105905244Z" level=info msg="Loading containers: done."
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.135340666Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.136139953Z" level=info msg="Daemon has completed initialization"
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.185948604Z" level=info msg="API listen on [::]:2376"
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.186071317Z" level=info msg="API listen on /var/run/docker.sock"
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 systemd[1]: Started Docker Application Container Engine.
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 systemd[1]: Stopping Docker Application Container Engine...
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.988898314Z" level=info msg="Processing signal 'terminated'"
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.989838579Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.990583130Z" level=info msg="Daemon shutdown complete"
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.990661536Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.990696238Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:42 multinode-101100 systemd[1]: docker.service: Deactivated successfully.
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:42 multinode-101100 systemd[1]: Stopped Docker Application Container Engine.
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 systemd[1]: Starting Docker Application Container Engine...
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:43.059729298Z" level=info msg="Starting up"
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:43.060541955Z" level=info msg="containerd not running, starting managed containerd"
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:43.061850245Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1055
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.092613476Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115368453Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0514 00:18:07.125197    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115403155Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0514 00:18:07.125735    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115435257Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0514 00:18:07.125735    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115450359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.125787    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115473760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:07.125787    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115486261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.125787    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115635771Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:07.125849    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115738478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.125849    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115756280Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0514 00:18:07.125901    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115766280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.125901    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115789882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.125949    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.116031099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.125949    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.119790059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:07.126002    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.119888566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:07.126002    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120181886Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:07.126050    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120287794Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0514 00:18:07.126050    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120385900Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0514 00:18:07.126103    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120406702Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0514 00:18:07.126103    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120419603Z" level=info msg="metadata content store policy set" policy=shared
	I0514 00:18:07.126103    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120713023Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0514 00:18:07.126151    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120746825Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0514 00:18:07.126151    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120760126Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0514 00:18:07.126151    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120773227Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0514 00:18:07.126203    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120785328Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0514 00:18:07.126203    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120826831Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0514 00:18:07.126250    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120999543Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0514 00:18:07.126250    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121054147Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0514 00:18:07.126250    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121092049Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0514 00:18:07.126303    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121102050Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0514 00:18:07.126303    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121115951Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.126303    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121126152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.126349    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121135052Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.126349    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121145153Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121156354Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121165854Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121175255Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121184656Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121204657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121216358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121225759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121235159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121243960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121254361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121263161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121275762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121287763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121299564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121364668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121378369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121388070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121400871Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121421772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121432873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121442174Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121474076Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121485477Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121493977Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121504178Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121548581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121558382Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121570783Z" level=info msg="NRI interface is disabled by configuration."
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121732894Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121765696Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0514 00:18:07.126401    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121795498Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0514 00:18:07.126936    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121808099Z" level=info msg="containerd successfully booted in 0.031442s"
	I0514 00:18:07.126936    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.110784113Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0514 00:18:07.126936    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.142577516Z" level=info msg="Loading containers: start."
	I0514 00:18:07.126986    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.405628939Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0514 00:18:07.126986    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.480865351Z" level=info msg="Loading containers: done."
	I0514 00:18:07.126986    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.503621028Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0514 00:18:07.126986    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.503703734Z" level=info msg="Daemon has completed initialization"
	I0514 00:18:07.127051    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.545253312Z" level=info msg="API listen on /var/run/docker.sock"
	I0514 00:18:07.127051    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.545312016Z" level=info msg="API listen on [::]:2376"
	I0514 00:18:07.127051    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 systemd[1]: Started Docker Application Container Engine.
	I0514 00:18:07.127051    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0514 00:18:07.127102    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0514 00:18:07.127102    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0514 00:18:07.127102    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Start docker client with request timeout 0s"
	I0514 00:18:07.127102    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0514 00:18:07.127166    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Loaded network plugin cni"
	I0514 00:18:07.127166    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0514 00:18:07.127166    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0514 00:18:07.127222    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0514 00:18:07.127222    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0514 00:18:07.127255    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Start cri-dockerd grpc backend"
	I0514 00:18:07.127255    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0514 00:18:07.127291    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-xqj6w_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"76d1b8ce19aba5b210540936b7a4b3d885cf4632a985872e3cf05d6cea2e0ca2\""
	I0514 00:18:07.127358    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-4kmx4_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"8bb49b28c842af421711ef939d018058baa07a32bbcdc98976511d4800986697\""
	I0514 00:18:07.127397    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.717439407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.127397    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.717535614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.127432    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.717551915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.127465    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.718214261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.127501    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.720663031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.127501    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.720923549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.127533    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.721017455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.127600    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.721295774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.127600    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.783128658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.127668    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.783344773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.127668    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.783450280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.127704    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.783657895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.127736    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.816093342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.127772    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.816151946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.127772    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.816166547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.127804    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.816251853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.127840    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ddcaadef980aca40a7740fe7c59949c3cb803d9fb441eca155b02162f3422bb8/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:07.127872    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/659643d47b9ae231a8b97d9871cab6dfac5f6d06e647c919d14170832ee47683/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:07.127939    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/419648c0d4053fc49953367496f1dbfe0fc7ce631e09569d18f5031a7c94053b/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:07.127939    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/509b8407e0955daa05e6418b83790728e61d0bd72fecdd814c8e92ae9e80d3a3/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:07.127975    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.258935521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.128013    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.259980593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.128013    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.260187008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128051    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.260361520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128083    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.272553064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.128120    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.272771779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.128153    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.272798781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128189    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.272907589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128189    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.314782590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.128227    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.314905098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.128264    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.314946601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128264    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.315263523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128302    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.385829312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.128338    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.386016625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.128338    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.386135333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128377    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.386495758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128413    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:55Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0514 00:18:07.128446    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.444453862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.128481    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.444531867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.128481    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.444549969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128520    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.444647976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128557    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.461909471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.128557    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.462106685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.128589    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.462142187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128625    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.462265196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128657    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.492511091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.128694    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.492965923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.128694    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.493135035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128727    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.493390352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128763    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a8ac60a565998ca52581e38272f2fcdb5f7038023f93d728cd74f5b89f5593ed/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:07.128794    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/468a0e2976ae45a571a99afabfcd1329c76873e973179fe56cc9ef46e2533698/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:07.128839    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.849392115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.128878    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.849539826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.128921    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.849623331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128959    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.849861048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.128996    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.857219658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.129028    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.857468675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.129058    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.857687390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129105    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.858016113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129140    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5233e076edceb93931d756579982e556959dfd31508760da215a8407dca14e56/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:57.218178264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:57.218325574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:57.218348976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:57.218459383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 dockerd[1049]: time="2024-05-14T00:17:17.430189771Z" level=info msg="ignoring event" container=b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:17.431460316Z" level=info msg="shim disconnected" id=b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7 namespace=moby
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:17.431869631Z" level=warning msg="cleaning up after shim disconnected" id=b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7 namespace=moby
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:17.432007736Z" level=info msg="cleaning up dead shim" namespace=moby
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 dockerd[1049]: time="2024-05-14T00:17:27.281698284Z" level=info msg="ignoring event" container=b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:27.282877145Z" level=info msg="shim disconnected" id=b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc namespace=moby
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:27.283000451Z" level=warning msg="cleaning up after shim disconnected" id=b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc namespace=moby
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:27.283015352Z" level=info msg="cleaning up dead shim" namespace=moby
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:28 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:28.098999177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:28 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:28.099271791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:28 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:28.099326694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:28 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:28.099641511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:40 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:40.092603581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:40 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:40.093732951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:40 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:40.093768053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129185    4316 command_runner.go:130] > May 14 00:17:40 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:40.095427255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129710    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235051362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.129710    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235156269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.129747    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235169170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129747    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235258576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235645702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235713507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235730808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235828014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:18:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1cccb5e8cee3b173bd49a88aee4239ccc8bc11a3a166316e92f3a9abce9b252d/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:18:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8cb9b6d6d0915742a78c054211d49332a04beb4875f8a8f80cc4131b2a11aa2d/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.743900500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.743970305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.744406335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.745139484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.808545660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.808756974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.808962988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.809189903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.129802    4316 command_runner.go:130] > May 14 00:18:04 multinode-101100 dockerd[1049]: 2024/05/14 00:18:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.130326    4316 command_runner.go:130] > May 14 00:18:04 multinode-101100 dockerd[1049]: 2024/05/14 00:18:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.130326    4316 command_runner.go:130] > May 14 00:18:04 multinode-101100 dockerd[1049]: 2024/05/14 00:18:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.130363    4316 command_runner.go:130] > May 14 00:18:06 multinode-101100 dockerd[1049]: 2024/05/14 00:18:06 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.130411    4316 command_runner.go:130] > May 14 00:18:06 multinode-101100 dockerd[1049]: 2024/05/14 00:18:06 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.130411    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.130411    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.130411    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.130411    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.130411    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.130411    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.130411    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.130411    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:07.160832    4316 logs.go:123] Gathering logs for describe nodes ...
	I0514 00:18:07.160832    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0514 00:18:07.337736    4316 command_runner.go:130] > Name:               multinode-101100
	I0514 00:18:07.337736    4316 command_runner.go:130] > Roles:              control-plane
	I0514 00:18:07.337736    4316 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     kubernetes.io/hostname=multinode-101100
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     kubernetes.io/os=linux
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     minikube.k8s.io/name=multinode-101100
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_13T23_56_10_0700
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0514 00:18:07.337736    4316 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0514 00:18:07.337736    4316 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0514 00:18:07.337736    4316 command_runner.go:130] > CreationTimestamp:  Mon, 13 May 2024 23:56:06 +0000
	I0514 00:18:07.337736    4316 command_runner.go:130] > Taints:             <none>
	I0514 00:18:07.337736    4316 command_runner.go:130] > Unschedulable:      false
	I0514 00:18:07.337736    4316 command_runner.go:130] > Lease:
	I0514 00:18:07.337736    4316 command_runner.go:130] >   HolderIdentity:  multinode-101100
	I0514 00:18:07.337736    4316 command_runner.go:130] >   AcquireTime:     <unset>
	I0514 00:18:07.337736    4316 command_runner.go:130] >   RenewTime:       Tue, 14 May 2024 00:18:06 +0000
	I0514 00:18:07.337736    4316 command_runner.go:130] > Conditions:
	I0514 00:18:07.337736    4316 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0514 00:18:07.337736    4316 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0514 00:18:07.337736    4316 command_runner.go:130] >   MemoryPressure   False   Tue, 14 May 2024 00:17:35 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0514 00:18:07.337736    4316 command_runner.go:130] >   DiskPressure     False   Tue, 14 May 2024 00:17:35 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0514 00:18:07.337736    4316 command_runner.go:130] >   PIDPressure      False   Tue, 14 May 2024 00:17:35 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0514 00:18:07.337736    4316 command_runner.go:130] >   Ready            True    Tue, 14 May 2024 00:17:35 +0000   Tue, 14 May 2024 00:17:35 +0000   KubeletReady                 kubelet is posting ready status
	I0514 00:18:07.337736    4316 command_runner.go:130] > Addresses:
	I0514 00:18:07.337736    4316 command_runner.go:130] >   InternalIP:  172.23.102.122
	I0514 00:18:07.337736    4316 command_runner.go:130] >   Hostname:    multinode-101100
	I0514 00:18:07.337736    4316 command_runner.go:130] > Capacity:
	I0514 00:18:07.337736    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:07.337736    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:07.337736    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:07.337736    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:07.337736    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:07.337736    4316 command_runner.go:130] > Allocatable:
	I0514 00:18:07.337736    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:07.337736    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:07.337736    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:07.337736    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:07.337736    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:07.337736    4316 command_runner.go:130] > System Info:
	I0514 00:18:07.337736    4316 command_runner.go:130] >   Machine ID:                 5110a322e7104904905e303a94b950b6
	I0514 00:18:07.337736    4316 command_runner.go:130] >   System UUID:                9b23fe4d-6d34-444b-8185-a84d51d23610
	I0514 00:18:07.337736    4316 command_runner.go:130] >   Boot ID:                    2e73d191-2dbe-4055-a17d-cff8a9e53a15
	I0514 00:18:07.337736    4316 command_runner.go:130] >   Kernel Version:             5.10.207
	I0514 00:18:07.337736    4316 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0514 00:18:07.337736    4316 command_runner.go:130] >   Operating System:           linux
	I0514 00:18:07.337736    4316 command_runner.go:130] >   Architecture:               amd64
	I0514 00:18:07.337736    4316 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0514 00:18:07.337736    4316 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0514 00:18:07.337736    4316 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0514 00:18:07.338804    4316 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0514 00:18:07.338804    4316 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0514 00:18:07.338804    4316 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0514 00:18:07.338804    4316 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0514 00:18:07.338804    4316 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0514 00:18:07.338804    4316 command_runner.go:130] >   default                     busybox-fc5497c4f-xqj6w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	I0514 00:18:07.338804    4316 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-4kmx4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	I0514 00:18:07.338804    4316 command_runner.go:130] >   kube-system                 etcd-multinode-101100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	I0514 00:18:07.338804    4316 command_runner.go:130] >   kube-system                 kindnet-9q2tv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0514 00:18:07.338938    4316 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-101100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	I0514 00:18:07.338938    4316 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-101100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	I0514 00:18:07.338938    4316 command_runner.go:130] >   kube-system                 kube-proxy-zhcz6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0514 00:18:07.338938    4316 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-101100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	I0514 00:18:07.338938    4316 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0514 00:18:07.338938    4316 command_runner.go:130] > Allocated resources:
	I0514 00:18:07.338938    4316 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0514 00:18:07.339051    4316 command_runner.go:130] >   Resource           Requests     Limits
	I0514 00:18:07.339051    4316 command_runner.go:130] >   --------           --------     ------
	I0514 00:18:07.339051    4316 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0514 00:18:07.339051    4316 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0514 00:18:07.339051    4316 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0514 00:18:07.339051    4316 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0514 00:18:07.339051    4316 command_runner.go:130] > Events:
	I0514 00:18:07.339051    4316 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0514 00:18:07.339051    4316 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0514 00:18:07.339051    4316 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0514 00:18:07.339051    4316 command_runner.go:130] >   Normal  Starting                 69s                kube-proxy       
	I0514 00:18:07.339051    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	I0514 00:18:07.339185    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	I0514 00:18:07.339185    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	I0514 00:18:07.339185    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:07.339185    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m                kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	I0514 00:18:07.339255    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:07.339255    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m                kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	I0514 00:18:07.339255    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m                kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	I0514 00:18:07.339255    4316 command_runner.go:130] >   Normal  Starting                 21m                kubelet          Starting kubelet.
	I0514 00:18:07.339255    4316 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-101100 event: Registered Node multinode-101100 in Controller
	I0514 00:18:07.339255    4316 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-101100 status is now: NodeReady
	I0514 00:18:07.339255    4316 command_runner.go:130] >   Normal  Starting                 78s                kubelet          Starting kubelet.
	I0514 00:18:07.339255    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:07.339378    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  77s (x8 over 78s)  kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	I0514 00:18:07.339378    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    77s (x8 over 78s)  kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	I0514 00:18:07.339378    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     77s (x7 over 78s)  kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	I0514 00:18:07.339433    4316 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-101100 event: Registered Node multinode-101100 in Controller
	I0514 00:18:07.339433    4316 command_runner.go:130] > Name:               multinode-101100-m02
	I0514 00:18:07.339433    4316 command_runner.go:130] > Roles:              <none>
	I0514 00:18:07.339433    4316 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0514 00:18:07.339433    4316 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0514 00:18:07.339433    4316 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0514 00:18:07.339433    4316 command_runner.go:130] >                     kubernetes.io/hostname=multinode-101100-m02
	I0514 00:18:07.339525    4316 command_runner.go:130] >                     kubernetes.io/os=linux
	I0514 00:18:07.339525    4316 command_runner.go:130] >                     minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	I0514 00:18:07.339525    4316 command_runner.go:130] >                     minikube.k8s.io/name=multinode-101100
	I0514 00:18:07.339525    4316 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0514 00:18:07.339592    4316 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_13T23_59_02_0700
	I0514 00:18:07.339592    4316 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0514 00:18:07.339592    4316 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0514 00:18:07.339592    4316 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0514 00:18:07.339592    4316 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0514 00:18:07.339687    4316 command_runner.go:130] > CreationTimestamp:  Mon, 13 May 2024 23:59:02 +0000
	I0514 00:18:07.339687    4316 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0514 00:18:07.339687    4316 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0514 00:18:07.339687    4316 command_runner.go:130] > Unschedulable:      false
	I0514 00:18:07.339687    4316 command_runner.go:130] > Lease:
	I0514 00:18:07.339687    4316 command_runner.go:130] >   HolderIdentity:  multinode-101100-m02
	I0514 00:18:07.339687    4316 command_runner.go:130] >   AcquireTime:     <unset>
	I0514 00:18:07.339687    4316 command_runner.go:130] >   RenewTime:       Tue, 14 May 2024 00:13:52 +0000
	I0514 00:18:07.339687    4316 command_runner.go:130] > Conditions:
	I0514 00:18:07.339687    4316 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0514 00:18:07.339687    4316 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0514 00:18:07.339827    4316 command_runner.go:130] >   MemoryPressure   Unknown   Tue, 14 May 2024 00:10:15 +0000   Tue, 14 May 2024 00:14:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:07.339827    4316 command_runner.go:130] >   DiskPressure     Unknown   Tue, 14 May 2024 00:10:15 +0000   Tue, 14 May 2024 00:14:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:07.339827    4316 command_runner.go:130] >   PIDPressure      Unknown   Tue, 14 May 2024 00:10:15 +0000   Tue, 14 May 2024 00:14:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:07.339827    4316 command_runner.go:130] >   Ready            Unknown   Tue, 14 May 2024 00:10:15 +0000   Tue, 14 May 2024 00:14:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:07.339827    4316 command_runner.go:130] > Addresses:
	I0514 00:18:07.339827    4316 command_runner.go:130] >   InternalIP:  172.23.109.58
	I0514 00:18:07.339827    4316 command_runner.go:130] >   Hostname:    multinode-101100-m02
	I0514 00:18:07.339827    4316 command_runner.go:130] > Capacity:
	I0514 00:18:07.339827    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:07.339827    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:07.339827    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:07.339943    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:07.339943    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:07.339943    4316 command_runner.go:130] > Allocatable:
	I0514 00:18:07.339943    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:07.339943    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:07.339943    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:07.339943    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:07.339943    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:07.339943    4316 command_runner.go:130] > System Info:
	I0514 00:18:07.339943    4316 command_runner.go:130] >   Machine ID:                 8d348bb1bbc048f4b99c681873b42d63
	I0514 00:18:07.339943    4316 command_runner.go:130] >   System UUID:                4330851b-5248-f245-9378-5fc25e670b55
	I0514 00:18:07.339943    4316 command_runner.go:130] >   Boot ID:                    9f102be6-1468-4570-8696-97e5ce51649a
	I0514 00:18:07.339943    4316 command_runner.go:130] >   Kernel Version:             5.10.207
	I0514 00:18:07.339943    4316 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0514 00:18:07.339943    4316 command_runner.go:130] >   Operating System:           linux
	I0514 00:18:07.340067    4316 command_runner.go:130] >   Architecture:               amd64
	I0514 00:18:07.340067    4316 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0514 00:18:07.340067    4316 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0514 00:18:07.340067    4316 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0514 00:18:07.340067    4316 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0514 00:18:07.340067    4316 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0514 00:18:07.340067    4316 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0514 00:18:07.340067    4316 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0514 00:18:07.340067    4316 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0514 00:18:07.340067    4316 command_runner.go:130] >   default                     busybox-fc5497c4f-q7442    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	I0514 00:18:07.340067    4316 command_runner.go:130] >   kube-system                 kindnet-2lwsm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	I0514 00:18:07.340067    4316 command_runner.go:130] >   kube-system                 kube-proxy-b25hq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0514 00:18:07.340067    4316 command_runner.go:130] > Allocated resources:
	I0514 00:18:07.340067    4316 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0514 00:18:07.340067    4316 command_runner.go:130] >   Resource           Requests   Limits
	I0514 00:18:07.340225    4316 command_runner.go:130] >   --------           --------   ------
	I0514 00:18:07.340225    4316 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0514 00:18:07.340225    4316 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0514 00:18:07.340277    4316 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0514 00:18:07.340277    4316 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0514 00:18:07.340277    4316 command_runner.go:130] > Events:
	I0514 00:18:07.340277    4316 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0514 00:18:07.340277    4316 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0514 00:18:07.340277    4316 command_runner.go:130] >   Normal  Starting                 18m                kube-proxy       
	I0514 00:18:07.340277    4316 command_runner.go:130] >   Normal  RegisteredNode           19m                node-controller  Node multinode-101100-m02 event: Registered Node multinode-101100-m02 in Controller
	I0514 00:18:07.340277    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  19m (x2 over 19m)  kubelet          Node multinode-101100-m02 status is now: NodeHasSufficientMemory
	I0514 00:18:07.340277    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    19m (x2 over 19m)  kubelet          Node multinode-101100-m02 status is now: NodeHasNoDiskPressure
	I0514 00:18:07.340399    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     19m (x2 over 19m)  kubelet          Node multinode-101100-m02 status is now: NodeHasSufficientPID
	I0514 00:18:07.340399    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:07.340431    4316 command_runner.go:130] >   Normal  NodeReady                18m                kubelet          Node multinode-101100-m02 status is now: NodeReady
	I0514 00:18:07.340481    4316 command_runner.go:130] >   Normal  NodeNotReady             3m35s              node-controller  Node multinode-101100-m02 status is now: NodeNotReady
	I0514 00:18:07.340481    4316 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-101100-m02 event: Registered Node multinode-101100-m02 in Controller
	I0514 00:18:07.340481    4316 command_runner.go:130] > Name:               multinode-101100-m03
	I0514 00:18:07.340545    4316 command_runner.go:130] > Roles:              <none>
	I0514 00:18:07.340545    4316 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0514 00:18:07.340545    4316 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0514 00:18:07.340545    4316 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0514 00:18:07.340545    4316 command_runner.go:130] >                     kubernetes.io/hostname=multinode-101100-m03
	I0514 00:18:07.340545    4316 command_runner.go:130] >                     kubernetes.io/os=linux
	I0514 00:18:07.340616    4316 command_runner.go:130] >                     minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	I0514 00:18:07.340616    4316 command_runner.go:130] >                     minikube.k8s.io/name=multinode-101100
	I0514 00:18:07.340616    4316 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0514 00:18:07.340616    4316 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_14T00_12_45_0700
	I0514 00:18:07.340679    4316 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0514 00:18:07.340679    4316 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0514 00:18:07.340679    4316 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0514 00:18:07.340747    4316 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0514 00:18:07.340747    4316 command_runner.go:130] > CreationTimestamp:  Tue, 14 May 2024 00:12:44 +0000
	I0514 00:18:07.340747    4316 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0514 00:18:07.340747    4316 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0514 00:18:07.340747    4316 command_runner.go:130] > Unschedulable:      false
	I0514 00:18:07.340812    4316 command_runner.go:130] > Lease:
	I0514 00:18:07.340812    4316 command_runner.go:130] >   HolderIdentity:  multinode-101100-m03
	I0514 00:18:07.340812    4316 command_runner.go:130] >   AcquireTime:     <unset>
	I0514 00:18:07.340812    4316 command_runner.go:130] >   RenewTime:       Tue, 14 May 2024 00:13:36 +0000
	I0514 00:18:07.340812    4316 command_runner.go:130] > Conditions:
	I0514 00:18:07.340812    4316 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0514 00:18:07.340882    4316 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0514 00:18:07.340882    4316 command_runner.go:130] >   MemoryPressure   Unknown   Tue, 14 May 2024 00:12:49 +0000   Tue, 14 May 2024 00:14:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:07.340945    4316 command_runner.go:130] >   DiskPressure     Unknown   Tue, 14 May 2024 00:12:49 +0000   Tue, 14 May 2024 00:14:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:07.340945    4316 command_runner.go:130] >   PIDPressure      Unknown   Tue, 14 May 2024 00:12:49 +0000   Tue, 14 May 2024 00:14:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:07.340945    4316 command_runner.go:130] >   Ready            Unknown   Tue, 14 May 2024 00:12:49 +0000   Tue, 14 May 2024 00:14:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:07.340945    4316 command_runner.go:130] > Addresses:
	I0514 00:18:07.340945    4316 command_runner.go:130] >   InternalIP:  172.23.102.231
	I0514 00:18:07.340945    4316 command_runner.go:130] >   Hostname:    multinode-101100-m03
	I0514 00:18:07.341024    4316 command_runner.go:130] > Capacity:
	I0514 00:18:07.341024    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:07.341024    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:07.341024    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:07.341082    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:07.341082    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:07.341082    4316 command_runner.go:130] > Allocatable:
	I0514 00:18:07.341082    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:07.341082    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:07.341082    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:07.341082    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:07.341152    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:07.341152    4316 command_runner.go:130] > System Info:
	I0514 00:18:07.341152    4316 command_runner.go:130] >   Machine ID:                 11c3fac528de4278b1dafef49e54ea09
	I0514 00:18:07.341152    4316 command_runner.go:130] >   System UUID:                0ee228e5-87a6-0549-9a8d-1747b73431ee
	I0514 00:18:07.341215    4316 command_runner.go:130] >   Boot ID:                    d5c1e04c-3081-4871-912e-a86507b8e24a
	I0514 00:18:07.341215    4316 command_runner.go:130] >   Kernel Version:             5.10.207
	I0514 00:18:07.341215    4316 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0514 00:18:07.341215    4316 command_runner.go:130] >   Operating System:           linux
	I0514 00:18:07.341215    4316 command_runner.go:130] >   Architecture:               amd64
	I0514 00:18:07.341275    4316 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0514 00:18:07.341275    4316 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0514 00:18:07.341275    4316 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0514 00:18:07.341275    4316 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0514 00:18:07.341275    4316 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0514 00:18:07.341275    4316 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0514 00:18:07.341341    4316 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0514 00:18:07.341341    4316 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0514 00:18:07.341341    4316 command_runner.go:130] >   kube-system                 kindnet-tfbt8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	I0514 00:18:07.341410    4316 command_runner.go:130] >   kube-system                 kube-proxy-8zsgn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	I0514 00:18:07.341410    4316 command_runner.go:130] > Allocated resources:
	I0514 00:18:07.341410    4316 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0514 00:18:07.341410    4316 command_runner.go:130] >   Resource           Requests   Limits
	I0514 00:18:07.341410    4316 command_runner.go:130] >   --------           --------   ------
	I0514 00:18:07.341475    4316 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0514 00:18:07.341545    4316 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0514 00:18:07.341545    4316 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0514 00:18:07.341545    4316 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0514 00:18:07.341545    4316 command_runner.go:130] > Events:
	I0514 00:18:07.341545    4316 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0514 00:18:07.341609    4316 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0514 00:18:07.341609    4316 command_runner.go:130] >   Normal  Starting                 5m19s                  kube-proxy       
	I0514 00:18:07.341609    4316 command_runner.go:130] >   Normal  Starting                 14m                    kube-proxy       
	I0514 00:18:07.341609    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:07.341680    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  14m (x2 over 14m)      kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientMemory
	I0514 00:18:07.341680    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    14m (x2 over 14m)      kubelet          Node multinode-101100-m03 status is now: NodeHasNoDiskPressure
	I0514 00:18:07.341680    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     14m (x2 over 14m)      kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientPID
	I0514 00:18:07.341748    4316 command_runner.go:130] >   Normal  NodeReady                14m                    kubelet          Node multinode-101100-m03 status is now: NodeReady
	I0514 00:18:07.341748    4316 command_runner.go:130] >   Normal  Starting                 5m23s                  kubelet          Starting kubelet.
	I0514 00:18:07.341792    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m23s (x2 over 5m23s)  kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientMemory
	I0514 00:18:07.341792    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m23s (x2 over 5m23s)  kubelet          Node multinode-101100-m03 status is now: NodeHasNoDiskPressure
	I0514 00:18:07.341857    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m23s (x2 over 5m23s)  kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientPID
	I0514 00:18:07.341857    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m23s                  kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:07.341901    4316 command_runner.go:130] >   Normal  RegisteredNode           5m20s                  node-controller  Node multinode-101100-m03 event: Registered Node multinode-101100-m03 in Controller
	I0514 00:18:07.341949    4316 command_runner.go:130] >   Normal  NodeReady                5m18s                  kubelet          Node multinode-101100-m03 status is now: NodeReady
	I0514 00:18:07.341949    4316 command_runner.go:130] >   Normal  NodeNotReady             3m50s                  node-controller  Node multinode-101100-m03 status is now: NodeNotReady
	I0514 00:18:07.341990    4316 command_runner.go:130] >   Normal  RegisteredNode           60s                    node-controller  Node multinode-101100-m03 event: Registered Node multinode-101100-m03 in Controller
	I0514 00:18:07.351337    4316 logs.go:123] Gathering logs for kube-proxy [91edaaa00da2] ...
	I0514 00:18:07.351337    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91edaaa00da2"
	I0514 00:18:07.381927    4316 command_runner.go:130] ! I0513 23:56:24.901713       1 server_linux.go:69] "Using iptables proxy"
	I0514 00:18:07.382210    4316 command_runner.go:130] ! I0513 23:56:24.929714       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.106.39"]
	I0514 00:18:07.382447    4316 command_runner.go:130] ! I0513 23:56:24.982680       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0514 00:18:07.382447    4316 command_runner.go:130] ! I0513 23:56:24.982795       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0514 00:18:07.382447    4316 command_runner.go:130] ! I0513 23:56:24.982816       1 server_linux.go:165] "Using iptables Proxier"
	I0514 00:18:07.382563    4316 command_runner.go:130] ! I0513 23:56:24.988669       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0514 00:18:07.382630    4316 command_runner.go:130] ! I0513 23:56:24.989566       1 server.go:872] "Version info" version="v1.30.0"
	I0514 00:18:07.382697    4316 command_runner.go:130] ! I0513 23:56:24.989671       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:07.382697    4316 command_runner.go:130] ! I0513 23:56:24.992700       1 config.go:192] "Starting service config controller"
	I0514 00:18:07.382697    4316 command_runner.go:130] ! I0513 23:56:24.993131       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0514 00:18:07.382793    4316 command_runner.go:130] ! I0513 23:56:24.993327       1 config.go:101] "Starting endpoint slice config controller"
	I0514 00:18:07.382793    4316 command_runner.go:130] ! I0513 23:56:24.993339       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0514 00:18:07.382793    4316 command_runner.go:130] ! I0513 23:56:24.994714       1 config.go:319] "Starting node config controller"
	I0514 00:18:07.382913    4316 command_runner.go:130] ! I0513 23:56:24.994744       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0514 00:18:07.382913    4316 command_runner.go:130] ! I0513 23:56:25.094420       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0514 00:18:07.382913    4316 command_runner.go:130] ! I0513 23:56:25.094530       1 shared_informer.go:320] Caches are synced for service config
	I0514 00:18:07.383027    4316 command_runner.go:130] ! I0513 23:56:25.094981       1 shared_informer.go:320] Caches are synced for node config
	I0514 00:18:07.385779    4316 logs.go:123] Gathering logs for kindnet [2b424a7cd98c] ...
	I0514 00:18:07.385837    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b424a7cd98c"
	I0514 00:18:07.409610    4316 command_runner.go:130] ! I0514 00:17:28.349800       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0514 00:18:07.409610    4316 command_runner.go:130] ! I0514 00:17:28.349935       1 main.go:107] hostIP = 172.23.102.122
	I0514 00:18:07.409610    4316 command_runner.go:130] ! podIP = 172.23.102.122
	I0514 00:18:07.410591    4316 command_runner.go:130] ! I0514 00:17:28.441282       1 main.go:116] setting mtu 1500 for CNI 
	I0514 00:18:07.410591    4316 command_runner.go:130] ! I0514 00:17:28.441413       1 main.go:146] kindnetd IP family: "ipv4"
	I0514 00:18:07.410591    4316 command_runner.go:130] ! I0514 00:17:28.441441       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0514 00:18:07.410634    4316 command_runner.go:130] ! I0514 00:17:29.045047       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:07.410634    4316 command_runner.go:130] ! I0514 00:17:29.045110       1 main.go:227] handling current node
	I0514 00:18:07.410634    4316 command_runner.go:130] ! I0514 00:17:29.045545       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:07.410634    4316 command_runner.go:130] ! I0514 00:17:29.045580       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:07.410683    4316 command_runner.go:130] ! I0514 00:17:29.045839       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.23.109.58 Flags: [] Table: 0} 
	I0514 00:18:07.410683    4316 command_runner.go:130] ! I0514 00:17:29.045983       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:07.410683    4316 command_runner.go:130] ! I0514 00:17:29.045993       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:07.410722    4316 command_runner.go:130] ! I0514 00:17:29.046039       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.23.102.231 Flags: [] Table: 0} 
	I0514 00:18:07.410722    4316 command_runner.go:130] ! I0514 00:17:39.055904       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:07.410774    4316 command_runner.go:130] ! I0514 00:17:39.056127       1 main.go:227] handling current node
	I0514 00:18:07.410774    4316 command_runner.go:130] ! I0514 00:17:39.056141       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:07.410774    4316 command_runner.go:130] ! I0514 00:17:39.056155       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:07.410820    4316 command_runner.go:130] ! I0514 00:17:39.056412       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:07.410820    4316 command_runner.go:130] ! I0514 00:17:39.056502       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:07.410820    4316 command_runner.go:130] ! I0514 00:17:49.062369       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:07.410820    4316 command_runner.go:130] ! I0514 00:17:49.062453       1 main.go:227] handling current node
	I0514 00:18:07.410868    4316 command_runner.go:130] ! I0514 00:17:49.062465       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:07.410868    4316 command_runner.go:130] ! I0514 00:17:49.062483       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:07.410868    4316 command_runner.go:130] ! I0514 00:17:49.062816       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:07.410914    4316 command_runner.go:130] ! I0514 00:17:49.062843       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:07.410914    4316 command_runner.go:130] ! I0514 00:17:59.075229       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:07.410914    4316 command_runner.go:130] ! I0514 00:17:59.075506       1 main.go:227] handling current node
	I0514 00:18:07.410914    4316 command_runner.go:130] ! I0514 00:17:59.075588       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:07.410962    4316 command_runner.go:130] ! I0514 00:17:59.075650       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:07.410962    4316 command_runner.go:130] ! I0514 00:17:59.075827       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:07.410962    4316 command_runner.go:130] ! I0514 00:17:59.075835       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:09.927617    4316 api_server.go:253] Checking apiserver healthz at https://172.23.102.122:8443/healthz ...
	I0514 00:18:09.936837    4316 api_server.go:279] https://172.23.102.122:8443/healthz returned 200:
	ok
	I0514 00:18:09.937043    4316 round_trippers.go:463] GET https://172.23.102.122:8443/version
	I0514 00:18:09.937043    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:09.937043    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:09.937159    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:09.938884    4316 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0514 00:18:09.938884    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:09.938884    4316 round_trippers.go:580]     Content-Length: 263
	I0514 00:18:09.938884    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:10 GMT
	I0514 00:18:09.939089    4316 round_trippers.go:580]     Audit-Id: e22436c5-0691-4fc9-a5ea-405f5ed5ffca
	I0514 00:18:09.939089    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:09.939089    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:09.939089    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:09.939089    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:09.939089    4316 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0514 00:18:09.939089    4316 api_server.go:141] control plane version: v1.30.0
	I0514 00:18:09.939199    4316 api_server.go:131] duration metric: took 3.5675531s to wait for apiserver health ...
	I0514 00:18:09.939199    4316 system_pods.go:43] waiting for kube-system pods to appear ...
	I0514 00:18:09.945769    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0514 00:18:09.967876    4316 command_runner.go:130] > da9e6534cd87
	I0514 00:18:09.968989    4316 logs.go:276] 1 containers: [da9e6534cd87]
	I0514 00:18:09.975518    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0514 00:18:09.994526    4316 command_runner.go:130] > 08450c853590
	I0514 00:18:09.994974    4316 logs.go:276] 1 containers: [08450c853590]
	I0514 00:18:10.001317    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0514 00:18:10.021786    4316 command_runner.go:130] > dcc5a109288b
	I0514 00:18:10.021786    4316 command_runner.go:130] > 76c5ab7859ef
	I0514 00:18:10.023439    4316 logs.go:276] 2 containers: [dcc5a109288b 76c5ab7859ef]
	I0514 00:18:10.034318    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0514 00:18:10.057359    4316 command_runner.go:130] > d3581c1c570c
	I0514 00:18:10.057461    4316 command_runner.go:130] > 964887fc5d36
	I0514 00:18:10.058059    4316 logs.go:276] 2 containers: [d3581c1c570c 964887fc5d36]
	I0514 00:18:10.065779    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0514 00:18:10.088604    4316 command_runner.go:130] > b2a1b31cd7de
	I0514 00:18:10.088887    4316 command_runner.go:130] > 91edaaa00da2
	I0514 00:18:10.088947    4316 logs.go:276] 2 containers: [b2a1b31cd7de 91edaaa00da2]
	I0514 00:18:10.097362    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0514 00:18:10.128764    4316 command_runner.go:130] > b87239d1199a
	I0514 00:18:10.128764    4316 command_runner.go:130] > e96f94398d6d
	I0514 00:18:10.128764    4316 logs.go:276] 2 containers: [b87239d1199a e96f94398d6d]
	I0514 00:18:10.137628    4316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0514 00:18:10.158775    4316 command_runner.go:130] > 2b424a7cd98c
	I0514 00:18:10.158775    4316 command_runner.go:130] > b7d8d9a5e5ea
	I0514 00:18:10.160257    4316 logs.go:276] 2 containers: [2b424a7cd98c b7d8d9a5e5ea]
	I0514 00:18:10.160343    4316 logs.go:123] Gathering logs for kindnet [b7d8d9a5e5ea] ...
	I0514 00:18:10.160343    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7d8d9a5e5ea"
	I0514 00:18:10.192439    4316 command_runner.go:130] ! I0514 00:16:57.751233       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0514 00:18:10.192439    4316 command_runner.go:130] ! I0514 00:16:57.751585       1 main.go:107] hostIP = 172.23.102.122
	I0514 00:18:10.192439    4316 command_runner.go:130] ! podIP = 172.23.102.122
	I0514 00:18:10.192439    4316 command_runner.go:130] ! I0514 00:16:57.752181       1 main.go:116] setting mtu 1500 for CNI 
	I0514 00:18:10.192439    4316 command_runner.go:130] ! I0514 00:16:57.752200       1 main.go:146] kindnetd IP family: "ipv4"
	I0514 00:18:10.192439    4316 command_runner.go:130] ! I0514 00:16:57.752221       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0514 00:18:10.192439    4316 command_runner.go:130] ! I0514 00:17:01.123977       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:10.192439    4316 command_runner.go:130] ! I0514 00:17:04.195495       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:10.192439    4316 command_runner.go:130] ! I0514 00:17:07.267636       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:10.192439    4316 command_runner.go:130] ! I0514 00:17:10.339619       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:10.192828    4316 command_runner.go:130] ! I0514 00:17:13.411801       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:10.192859    4316 command_runner.go:130] ! panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:18:10.192859    4316 command_runner.go:130] ! goroutine 1 [running]:
	I0514 00:18:10.192859    4316 command_runner.go:130] ! main.main()
	I0514 00:18:10.192859    4316 command_runner.go:130] ! 	/go/src/cmd/kindnetd/main.go:195 +0xd3d
	I0514 00:18:10.195416    4316 logs.go:123] Gathering logs for describe nodes ...
	I0514 00:18:10.195416    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0514 00:18:10.385028    4316 command_runner.go:130] > Name:               multinode-101100
	I0514 00:18:10.385028    4316 command_runner.go:130] > Roles:              control-plane
	I0514 00:18:10.385028    4316 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0514 00:18:10.385028    4316 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0514 00:18:10.385028    4316 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0514 00:18:10.385028    4316 command_runner.go:130] >                     kubernetes.io/hostname=multinode-101100
	I0514 00:18:10.385028    4316 command_runner.go:130] >                     kubernetes.io/os=linux
	I0514 00:18:10.385028    4316 command_runner.go:130] >                     minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	I0514 00:18:10.385028    4316 command_runner.go:130] >                     minikube.k8s.io/name=multinode-101100
	I0514 00:18:10.385028    4316 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0514 00:18:10.385028    4316 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_13T23_56_10_0700
	I0514 00:18:10.385028    4316 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0514 00:18:10.385028    4316 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0514 00:18:10.385266    4316 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0514 00:18:10.385266    4316 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0514 00:18:10.385266    4316 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0514 00:18:10.385266    4316 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0514 00:18:10.385266    4316 command_runner.go:130] > CreationTimestamp:  Mon, 13 May 2024 23:56:06 +0000
	I0514 00:18:10.385266    4316 command_runner.go:130] > Taints:             <none>
	I0514 00:18:10.385266    4316 command_runner.go:130] > Unschedulable:      false
	I0514 00:18:10.385266    4316 command_runner.go:130] > Lease:
	I0514 00:18:10.385339    4316 command_runner.go:130] >   HolderIdentity:  multinode-101100
	I0514 00:18:10.385339    4316 command_runner.go:130] >   AcquireTime:     <unset>
	I0514 00:18:10.385339    4316 command_runner.go:130] >   RenewTime:       Tue, 14 May 2024 00:18:06 +0000
	I0514 00:18:10.385339    4316 command_runner.go:130] > Conditions:
	I0514 00:18:10.385339    4316 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0514 00:18:10.385389    4316 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0514 00:18:10.385389    4316 command_runner.go:130] >   MemoryPressure   False   Tue, 14 May 2024 00:17:35 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0514 00:18:10.385389    4316 command_runner.go:130] >   DiskPressure     False   Tue, 14 May 2024 00:17:35 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0514 00:18:10.385389    4316 command_runner.go:130] >   PIDPressure      False   Tue, 14 May 2024 00:17:35 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0514 00:18:10.385389    4316 command_runner.go:130] >   Ready            True    Tue, 14 May 2024 00:17:35 +0000   Tue, 14 May 2024 00:17:35 +0000   KubeletReady                 kubelet is posting ready status
	I0514 00:18:10.385389    4316 command_runner.go:130] > Addresses:
	I0514 00:18:10.385533    4316 command_runner.go:130] >   InternalIP:  172.23.102.122
	I0514 00:18:10.385533    4316 command_runner.go:130] >   Hostname:    multinode-101100
	I0514 00:18:10.385533    4316 command_runner.go:130] > Capacity:
	I0514 00:18:10.385533    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:10.385594    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:10.385594    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:10.385594    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:10.385594    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:10.385594    4316 command_runner.go:130] > Allocatable:
	I0514 00:18:10.385594    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:10.385594    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:10.385594    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:10.385594    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:10.385594    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:10.385594    4316 command_runner.go:130] > System Info:
	I0514 00:18:10.385668    4316 command_runner.go:130] >   Machine ID:                 5110a322e7104904905e303a94b950b6
	I0514 00:18:10.385668    4316 command_runner.go:130] >   System UUID:                9b23fe4d-6d34-444b-8185-a84d51d23610
	I0514 00:18:10.385704    4316 command_runner.go:130] >   Boot ID:                    2e73d191-2dbe-4055-a17d-cff8a9e53a15
	I0514 00:18:10.385704    4316 command_runner.go:130] >   Kernel Version:             5.10.207
	I0514 00:18:10.385704    4316 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0514 00:18:10.385704    4316 command_runner.go:130] >   Operating System:           linux
	I0514 00:18:10.385740    4316 command_runner.go:130] >   Architecture:               amd64
	I0514 00:18:10.385740    4316 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0514 00:18:10.385740    4316 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0514 00:18:10.385798    4316 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0514 00:18:10.385798    4316 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0514 00:18:10.385798    4316 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0514 00:18:10.385798    4316 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0514 00:18:10.385855    4316 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0514 00:18:10.385855    4316 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0514 00:18:10.385855    4316 command_runner.go:130] >   default                     busybox-fc5497c4f-xqj6w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	I0514 00:18:10.385855    4316 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-4kmx4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	I0514 00:18:10.385927    4316 command_runner.go:130] >   kube-system                 etcd-multinode-101100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         75s
	I0514 00:18:10.385927    4316 command_runner.go:130] >   kube-system                 kindnet-9q2tv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0514 00:18:10.385927    4316 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-101100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	I0514 00:18:10.385965    4316 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-101100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	I0514 00:18:10.385965    4316 command_runner.go:130] >   kube-system                 kube-proxy-zhcz6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0514 00:18:10.386024    4316 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-101100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	I0514 00:18:10.386024    4316 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0514 00:18:10.386024    4316 command_runner.go:130] > Allocated resources:
	I0514 00:18:10.386024    4316 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0514 00:18:10.386024    4316 command_runner.go:130] >   Resource           Requests     Limits
	I0514 00:18:10.386024    4316 command_runner.go:130] >   --------           --------     ------
	I0514 00:18:10.386080    4316 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0514 00:18:10.386080    4316 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0514 00:18:10.386080    4316 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0514 00:18:10.386154    4316 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0514 00:18:10.386154    4316 command_runner.go:130] > Events:
	I0514 00:18:10.386154    4316 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0514 00:18:10.386154    4316 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0514 00:18:10.386192    4316 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0514 00:18:10.386192    4316 command_runner.go:130] >   Normal  Starting                 72s                kube-proxy       
	I0514 00:18:10.386192    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	I0514 00:18:10.386192    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	I0514 00:18:10.386192    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	I0514 00:18:10.386251    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:10.386251    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  22m                kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	I0514 00:18:10.386251    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:10.386251    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    22m                kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	I0514 00:18:10.386313    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     22m                kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	I0514 00:18:10.386313    4316 command_runner.go:130] >   Normal  Starting                 22m                kubelet          Starting kubelet.
	I0514 00:18:10.386313    4316 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-101100 event: Registered Node multinode-101100 in Controller
	I0514 00:18:10.386387    4316 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-101100 status is now: NodeReady
	I0514 00:18:10.386387    4316 command_runner.go:130] >   Normal  Starting                 81s                kubelet          Starting kubelet.
	I0514 00:18:10.386387    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  81s                kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:10.386444    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  80s (x8 over 81s)  kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	I0514 00:18:10.386444    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    80s (x8 over 81s)  kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	I0514 00:18:10.386444    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     80s (x7 over 81s)  kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	I0514 00:18:10.386496    4316 command_runner.go:130] >   Normal  RegisteredNode           63s                node-controller  Node multinode-101100 event: Registered Node multinode-101100 in Controller
	I0514 00:18:10.386496    4316 command_runner.go:130] > Name:               multinode-101100-m02
	I0514 00:18:10.386496    4316 command_runner.go:130] > Roles:              <none>
	I0514 00:18:10.386496    4316 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0514 00:18:10.386530    4316 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0514 00:18:10.386530    4316 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0514 00:18:10.386530    4316 command_runner.go:130] >                     kubernetes.io/hostname=multinode-101100-m02
	I0514 00:18:10.386530    4316 command_runner.go:130] >                     kubernetes.io/os=linux
	I0514 00:18:10.386576    4316 command_runner.go:130] >                     minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	I0514 00:18:10.386576    4316 command_runner.go:130] >                     minikube.k8s.io/name=multinode-101100
	I0514 00:18:10.386576    4316 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0514 00:18:10.386576    4316 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_13T23_59_02_0700
	I0514 00:18:10.386576    4316 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0514 00:18:10.386576    4316 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0514 00:18:10.386576    4316 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0514 00:18:10.386649    4316 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0514 00:18:10.386685    4316 command_runner.go:130] > CreationTimestamp:  Mon, 13 May 2024 23:59:02 +0000
	I0514 00:18:10.386685    4316 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0514 00:18:10.386685    4316 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0514 00:18:10.386685    4316 command_runner.go:130] > Unschedulable:      false
	I0514 00:18:10.386685    4316 command_runner.go:130] > Lease:
	I0514 00:18:10.386685    4316 command_runner.go:130] >   HolderIdentity:  multinode-101100-m02
	I0514 00:18:10.386745    4316 command_runner.go:130] >   AcquireTime:     <unset>
	I0514 00:18:10.386745    4316 command_runner.go:130] >   RenewTime:       Tue, 14 May 2024 00:13:52 +0000
	I0514 00:18:10.386745    4316 command_runner.go:130] > Conditions:
	I0514 00:18:10.386781    4316 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0514 00:18:10.386781    4316 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0514 00:18:10.386781    4316 command_runner.go:130] >   MemoryPressure   Unknown   Tue, 14 May 2024 00:10:15 +0000   Tue, 14 May 2024 00:14:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:10.386826    4316 command_runner.go:130] >   DiskPressure     Unknown   Tue, 14 May 2024 00:10:15 +0000   Tue, 14 May 2024 00:14:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:10.386826    4316 command_runner.go:130] >   PIDPressure      Unknown   Tue, 14 May 2024 00:10:15 +0000   Tue, 14 May 2024 00:14:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:10.386826    4316 command_runner.go:130] >   Ready            Unknown   Tue, 14 May 2024 00:10:15 +0000   Tue, 14 May 2024 00:14:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:10.386826    4316 command_runner.go:130] > Addresses:
	I0514 00:18:10.386826    4316 command_runner.go:130] >   InternalIP:  172.23.109.58
	I0514 00:18:10.386826    4316 command_runner.go:130] >   Hostname:    multinode-101100-m02
	I0514 00:18:10.386899    4316 command_runner.go:130] > Capacity:
	I0514 00:18:10.386899    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:10.386899    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:10.386899    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:10.386934    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:10.386934    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:10.386934    4316 command_runner.go:130] > Allocatable:
	I0514 00:18:10.386934    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:10.386934    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:10.386934    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:10.386980    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:10.386980    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:10.386980    4316 command_runner.go:130] > System Info:
	I0514 00:18:10.386980    4316 command_runner.go:130] >   Machine ID:                 8d348bb1bbc048f4b99c681873b42d63
	I0514 00:18:10.386980    4316 command_runner.go:130] >   System UUID:                4330851b-5248-f245-9378-5fc25e670b55
	I0514 00:18:10.386980    4316 command_runner.go:130] >   Boot ID:                    9f102be6-1468-4570-8696-97e5ce51649a
	I0514 00:18:10.386980    4316 command_runner.go:130] >   Kernel Version:             5.10.207
	I0514 00:18:10.387052    4316 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0514 00:18:10.387052    4316 command_runner.go:130] >   Operating System:           linux
	I0514 00:18:10.387052    4316 command_runner.go:130] >   Architecture:               amd64
	I0514 00:18:10.387088    4316 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0514 00:18:10.387088    4316 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0514 00:18:10.387088    4316 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0514 00:18:10.387088    4316 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0514 00:18:10.387088    4316 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0514 00:18:10.387150    4316 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0514 00:18:10.387150    4316 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0514 00:18:10.387150    4316 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0514 00:18:10.387188    4316 command_runner.go:130] >   default                     busybox-fc5497c4f-q7442    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	I0514 00:18:10.387188    4316 command_runner.go:130] >   kube-system                 kindnet-2lwsm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	I0514 00:18:10.387225    4316 command_runner.go:130] >   kube-system                 kube-proxy-b25hq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0514 00:18:10.387225    4316 command_runner.go:130] > Allocated resources:
	I0514 00:18:10.387225    4316 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0514 00:18:10.387225    4316 command_runner.go:130] >   Resource           Requests   Limits
	I0514 00:18:10.387225    4316 command_runner.go:130] >   --------           --------   ------
	I0514 00:18:10.387225    4316 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0514 00:18:10.387282    4316 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0514 00:18:10.387282    4316 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0514 00:18:10.387282    4316 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0514 00:18:10.387282    4316 command_runner.go:130] > Events:
	I0514 00:18:10.387282    4316 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0514 00:18:10.387282    4316 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0514 00:18:10.387282    4316 command_runner.go:130] >   Normal  Starting                 18m                kube-proxy       
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Normal  RegisteredNode           19m                node-controller  Node multinode-101100-m02 event: Registered Node multinode-101100-m02 in Controller
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  19m (x2 over 19m)  kubelet          Node multinode-101100-m02 status is now: NodeHasSufficientMemory
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    19m (x2 over 19m)  kubelet          Node multinode-101100-m02 status is now: NodeHasNoDiskPressure
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     19m (x2 over 19m)  kubelet          Node multinode-101100-m02 status is now: NodeHasSufficientPID
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Normal  NodeReady                18m                kubelet          Node multinode-101100-m02 status is now: NodeReady
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Normal  NodeNotReady             3m38s              node-controller  Node multinode-101100-m02 status is now: NodeNotReady
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Normal  RegisteredNode           63s                node-controller  Node multinode-101100-m02 event: Registered Node multinode-101100-m02 in Controller
	I0514 00:18:10.387356    4316 command_runner.go:130] > Name:               multinode-101100-m03
	I0514 00:18:10.387356    4316 command_runner.go:130] > Roles:              <none>
	I0514 00:18:10.387356    4316 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0514 00:18:10.387356    4316 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0514 00:18:10.387356    4316 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0514 00:18:10.387356    4316 command_runner.go:130] >                     kubernetes.io/hostname=multinode-101100-m03
	I0514 00:18:10.387356    4316 command_runner.go:130] >                     kubernetes.io/os=linux
	I0514 00:18:10.387356    4316 command_runner.go:130] >                     minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	I0514 00:18:10.387356    4316 command_runner.go:130] >                     minikube.k8s.io/name=multinode-101100
	I0514 00:18:10.387356    4316 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0514 00:18:10.387356    4316 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_14T00_12_45_0700
	I0514 00:18:10.387356    4316 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0514 00:18:10.387356    4316 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0514 00:18:10.387356    4316 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0514 00:18:10.387356    4316 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0514 00:18:10.387356    4316 command_runner.go:130] > CreationTimestamp:  Tue, 14 May 2024 00:12:44 +0000
	I0514 00:18:10.387356    4316 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0514 00:18:10.387356    4316 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0514 00:18:10.387356    4316 command_runner.go:130] > Unschedulable:      false
	I0514 00:18:10.387356    4316 command_runner.go:130] > Lease:
	I0514 00:18:10.387356    4316 command_runner.go:130] >   HolderIdentity:  multinode-101100-m03
	I0514 00:18:10.387356    4316 command_runner.go:130] >   AcquireTime:     <unset>
	I0514 00:18:10.387356    4316 command_runner.go:130] >   RenewTime:       Tue, 14 May 2024 00:13:36 +0000
	I0514 00:18:10.387356    4316 command_runner.go:130] > Conditions:
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0514 00:18:10.387356    4316 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0514 00:18:10.387356    4316 command_runner.go:130] >   MemoryPressure   Unknown   Tue, 14 May 2024 00:12:49 +0000   Tue, 14 May 2024 00:14:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:10.387356    4316 command_runner.go:130] >   DiskPressure     Unknown   Tue, 14 May 2024 00:12:49 +0000   Tue, 14 May 2024 00:14:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:10.387356    4316 command_runner.go:130] >   PIDPressure      Unknown   Tue, 14 May 2024 00:12:49 +0000   Tue, 14 May 2024 00:14:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Ready            Unknown   Tue, 14 May 2024 00:12:49 +0000   Tue, 14 May 2024 00:14:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0514 00:18:10.387356    4316 command_runner.go:130] > Addresses:
	I0514 00:18:10.387356    4316 command_runner.go:130] >   InternalIP:  172.23.102.231
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Hostname:    multinode-101100-m03
	I0514 00:18:10.387356    4316 command_runner.go:130] > Capacity:
	I0514 00:18:10.387356    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:10.387356    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:10.387356    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:10.387356    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:10.387356    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:10.387356    4316 command_runner.go:130] > Allocatable:
	I0514 00:18:10.387356    4316 command_runner.go:130] >   cpu:                2
	I0514 00:18:10.387356    4316 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0514 00:18:10.387356    4316 command_runner.go:130] >   hugepages-2Mi:      0
	I0514 00:18:10.387356    4316 command_runner.go:130] >   memory:             2164264Ki
	I0514 00:18:10.387356    4316 command_runner.go:130] >   pods:               110
	I0514 00:18:10.387356    4316 command_runner.go:130] > System Info:
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Machine ID:                 11c3fac528de4278b1dafef49e54ea09
	I0514 00:18:10.387356    4316 command_runner.go:130] >   System UUID:                0ee228e5-87a6-0549-9a8d-1747b73431ee
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Boot ID:                    d5c1e04c-3081-4871-912e-a86507b8e24a
	I0514 00:18:10.387356    4316 command_runner.go:130] >   Kernel Version:             5.10.207
	I0514 00:18:10.387356    4316 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0514 00:18:10.387909    4316 command_runner.go:130] >   Operating System:           linux
	I0514 00:18:10.387909    4316 command_runner.go:130] >   Architecture:               amd64
	I0514 00:18:10.387909    4316 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0514 00:18:10.387949    4316 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0514 00:18:10.387949    4316 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0514 00:18:10.387949    4316 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0514 00:18:10.387949    4316 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0514 00:18:10.387992    4316 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0514 00:18:10.387992    4316 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0514 00:18:10.388024    4316 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0514 00:18:10.388051    4316 command_runner.go:130] >   kube-system                 kindnet-tfbt8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	I0514 00:18:10.388051    4316 command_runner.go:130] >   kube-system                 kube-proxy-8zsgn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	I0514 00:18:10.388051    4316 command_runner.go:130] > Allocated resources:
	I0514 00:18:10.388051    4316 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Resource           Requests   Limits
	I0514 00:18:10.388051    4316 command_runner.go:130] >   --------           --------   ------
	I0514 00:18:10.388051    4316 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0514 00:18:10.388051    4316 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0514 00:18:10.388051    4316 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0514 00:18:10.388051    4316 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0514 00:18:10.388051    4316 command_runner.go:130] > Events:
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0514 00:18:10.388051    4316 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  Starting                 5m22s                  kube-proxy       
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  Starting                 14m                    kube-proxy       
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  14m (x2 over 14m)      kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientMemory
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    14m (x2 over 14m)      kubelet          Node multinode-101100-m03 status is now: NodeHasNoDiskPressure
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     14m (x2 over 14m)      kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientPID
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  NodeReady                14m                    kubelet          Node multinode-101100-m03 status is now: NodeReady
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  Starting                 5m26s                  kubelet          Starting kubelet.
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m26s (x2 over 5m26s)  kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientMemory
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m26s (x2 over 5m26s)  kubelet          Node multinode-101100-m03 status is now: NodeHasNoDiskPressure
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m26s (x2 over 5m26s)  kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientPID
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m26s                  kubelet          Updated Node Allocatable limit across pods
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  RegisteredNode           5m23s                  node-controller  Node multinode-101100-m03 event: Registered Node multinode-101100-m03 in Controller
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  NodeReady                5m21s                  kubelet          Node multinode-101100-m03 status is now: NodeReady
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  NodeNotReady             3m53s                  node-controller  Node multinode-101100-m03 status is now: NodeNotReady
	I0514 00:18:10.388051    4316 command_runner.go:130] >   Normal  RegisteredNode           63s                    node-controller  Node multinode-101100-m03 event: Registered Node multinode-101100-m03 in Controller
	I0514 00:18:10.397595    4316 logs.go:123] Gathering logs for coredns [76c5ab7859ef] ...
	I0514 00:18:10.397595    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5ab7859ef"
	I0514 00:18:10.424991    4316 command_runner.go:130] > .:53
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = aa3c53a4fee7c79042020c4ad5abc53f615c90ace85c56ddcef4febd643c83c914a53a500e1bfe4eab6dd4f6a22b9d2014a8ba875b505ed10d3063ed95ac2ed3
	I0514 00:18:10.424991    4316 command_runner.go:130] > CoreDNS-1.11.1
	I0514 00:18:10.424991    4316 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 127.0.0.1:57161 - 45698 "HINFO IN 8990392176501838712.5889638972791529478. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051692136s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.1.2:55099 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000211505s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.1.2:55878 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.185519855s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.1.2:33619 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.15684109s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.1.2:49440 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.197645067s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.0.3:50960 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000430608s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.0.3:46839 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000167103s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.0.3:55330 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000155803s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.0.3:50874 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000131802s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.1.2:53724 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096802s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.1.2:59752 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.042707366s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.1.2:54429 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000269706s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.1.2:48558 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000262605s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.1.2:46986 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023487677s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.1.2:60460 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174903s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.1.2:60672 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000204304s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.1.2:36311 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110402s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.0.3:43910 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000301006s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.0.3:52495 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000145803s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.0.3:46357 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066702s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.0.3:41390 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062301s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.0.3:35739 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000084301s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.0.3:44800 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000163303s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.0.3:57631 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068702s
	I0514 00:18:10.424991    4316 command_runner.go:130] > [INFO] 10.244.0.3:50842 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135702s
	I0514 00:18:10.425547    4316 command_runner.go:130] > [INFO] 10.244.1.2:41210 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204604s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.1.2:57858 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073801s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.1.2:48782 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000152303s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.1.2:36081 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121002s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.0.3:46909 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115002s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.0.3:36030 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000220205s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.0.3:56187 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059401s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.0.3:51500 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099802s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.1.2:57247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147903s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.1.2:46132 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000170203s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.1.2:57206 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000452309s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.1.2:44795 - 5 "PTR IN 1.96.23.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000146203s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.0.3:33385 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000082102s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.0.3:56742 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000173704s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.0.3:46927 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185904s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] 10.244.0.3:42956 - 5 "PTR IN 1.96.23.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000054801s
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0514 00:18:10.425601    4316 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0514 00:18:10.428888    4316 logs.go:123] Gathering logs for kube-scheduler [964887fc5d36] ...
	I0514 00:18:10.428888    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964887fc5d36"
	I0514 00:18:10.452924    4316 command_runner.go:130] ! I0513 23:56:04.693680       1 serving.go:380] Generated self-signed cert in-memory
	I0514 00:18:10.453023    4316 command_runner.go:130] ! W0513 23:56:06.133341       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0514 00:18:10.453023    4316 command_runner.go:130] ! W0513 23:56:06.133396       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:10.453069    4316 command_runner.go:130] ! W0513 23:56:06.133407       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0514 00:18:10.453093    4316 command_runner.go:130] ! W0513 23:56:06.133415       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0514 00:18:10.453093    4316 command_runner.go:130] ! I0513 23:56:06.170291       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0514 00:18:10.453093    4316 command_runner.go:130] ! I0513 23:56:06.170533       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:10.453093    4316 command_runner.go:130] ! I0513 23:56:06.174536       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0514 00:18:10.453093    4316 command_runner.go:130] ! I0513 23:56:06.174684       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0514 00:18:10.453093    4316 command_runner.go:130] ! I0513 23:56:06.174703       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:10.453093    4316 command_runner.go:130] ! I0513 23:56:06.174918       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:18:10.453093    4316 command_runner.go:130] ! W0513 23:56:06.182722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! E0513 23:56:06.186053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! W0513 23:56:06.183583       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! W0513 23:56:06.183698       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! W0513 23:56:06.183781       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! W0513 23:56:06.183835       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! W0513 23:56:06.183868       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! W0513 23:56:06.184039       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:10.453093    4316 command_runner.go:130] ! W0513 23:56:06.186929       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! W0513 23:56:06.186969       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! W0513 23:56:06.187026       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! E0513 23:56:06.188647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! E0513 23:56:06.188112       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! E0513 23:56:06.188121       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! E0513 23:56:06.188233       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! E0513 23:56:06.188242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! E0513 23:56:06.189252       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0514 00:18:10.453093    4316 command_runner.go:130] ! E0513 23:56:06.189533       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:10.453093    4316 command_runner.go:130] ! E0513 23:56:06.189643       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.453621    4316 command_runner.go:130] ! E0513 23:56:06.189773       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.453663    4316 command_runner.go:130] ! W0513 23:56:06.190106       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0514 00:18:10.453663    4316 command_runner.go:130] ! E0513 23:56:06.190324       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0514 00:18:10.453698    4316 command_runner.go:130] ! W0513 23:56:06.190538       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0514 00:18:10.453733    4316 command_runner.go:130] ! E0513 23:56:06.191036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0514 00:18:10.453761    4316 command_runner.go:130] ! W0513 23:56:06.191581       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0514 00:18:10.453761    4316 command_runner.go:130] ! E0513 23:56:06.192160       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0514 00:18:10.453761    4316 command_runner.go:130] ! W0513 23:56:06.191626       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.453843    4316 command_runner.go:130] ! E0513 23:56:06.192721       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.453843    4316 command_runner.go:130] ! W0513 23:56:06.190821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0514 00:18:10.453890    4316 command_runner.go:130] ! E0513 23:56:06.193134       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0514 00:18:10.453890    4316 command_runner.go:130] ! W0513 23:56:07.154218       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0514 00:18:10.453930    4316 command_runner.go:130] ! E0513 23:56:07.155376       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0514 00:18:10.453965    4316 command_runner.go:130] ! W0513 23:56:07.229548       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0514 00:18:10.454003    4316 command_runner.go:130] ! E0513 23:56:07.229613       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! W0513 23:56:07.344429       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! E0513 23:56:07.344853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! W0513 23:56:07.410556       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! E0513 23:56:07.410716       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! W0513 23:56:07.423084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! E0513 23:56:07.423126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! W0513 23:56:07.467897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! E0513 23:56:07.467939       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! W0513 23:56:07.484903       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! E0513 23:56:07.485019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! W0513 23:56:07.545758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! E0513 23:56:07.546087       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! W0513 23:56:07.573884       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! E0513 23:56:07.573980       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! W0513 23:56:07.633780       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! E0513 23:56:07.633901       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! W0513 23:56:07.680821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! E0513 23:56:07.680938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! W0513 23:56:07.704130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! E0513 23:56:07.704357       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0514 00:18:10.454027    4316 command_runner.go:130] ! W0513 23:56:07.736914       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:10.454027    4316 command_runner.go:130] ! E0513 23:56:07.737079       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:10.454027    4316 command_runner.go:130] ! W0513 23:56:07.754367       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0514 00:18:10.454555    4316 command_runner.go:130] ! E0513 23:56:07.754798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0514 00:18:10.454555    4316 command_runner.go:130] ! I0513 23:56:09.676327       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:18:10.454605    4316 command_runner.go:130] ! E0514 00:14:35.689344       1 run.go:74] "command failed" err="finished without leader elect"
	I0514 00:18:10.465686    4316 logs.go:123] Gathering logs for kube-proxy [b2a1b31cd7de] ...
	I0514 00:18:10.465686    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a1b31cd7de"
	I0514 00:18:10.489542    4316 command_runner.go:130] ! I0514 00:16:57.528613       1 server_linux.go:69] "Using iptables proxy"
	I0514 00:18:10.489749    4316 command_runner.go:130] ! I0514 00:16:57.562847       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.102.122"]
	I0514 00:18:10.489749    4316 command_runner.go:130] ! I0514 00:16:57.701301       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0514 00:18:10.489749    4316 command_runner.go:130] ! I0514 00:16:57.701447       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0514 00:18:10.489749    4316 command_runner.go:130] ! I0514 00:16:57.701476       1 server_linux.go:165] "Using iptables Proxier"
	I0514 00:18:10.489833    4316 command_runner.go:130] ! I0514 00:16:57.708219       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0514 00:18:10.489833    4316 command_runner.go:130] ! I0514 00:16:57.708800       1 server.go:872] "Version info" version="v1.30.0"
	I0514 00:18:10.489833    4316 command_runner.go:130] ! I0514 00:16:57.708841       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:10.489833    4316 command_runner.go:130] ! I0514 00:16:57.712422       1 config.go:192] "Starting service config controller"
	I0514 00:18:10.489833    4316 command_runner.go:130] ! I0514 00:16:57.712733       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0514 00:18:10.489833    4316 command_runner.go:130] ! I0514 00:16:57.712780       1 config.go:101] "Starting endpoint slice config controller"
	I0514 00:18:10.489833    4316 command_runner.go:130] ! I0514 00:16:57.712824       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0514 00:18:10.489833    4316 command_runner.go:130] ! I0514 00:16:57.715339       1 config.go:319] "Starting node config controller"
	I0514 00:18:10.489833    4316 command_runner.go:130] ! I0514 00:16:57.717651       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0514 00:18:10.489833    4316 command_runner.go:130] ! I0514 00:16:57.815732       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0514 00:18:10.489833    4316 command_runner.go:130] ! I0514 00:16:57.815811       1 shared_informer.go:320] Caches are synced for service config
	I0514 00:18:10.489833    4316 command_runner.go:130] ! I0514 00:16:57.818050       1 shared_informer.go:320] Caches are synced for node config
	I0514 00:18:10.491666    4316 logs.go:123] Gathering logs for kube-proxy [91edaaa00da2] ...
	I0514 00:18:10.491754    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91edaaa00da2"
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.901713       1 server_linux.go:69] "Using iptables proxy"
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.929714       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.106.39"]
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.982680       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.982795       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.982816       1 server_linux.go:165] "Using iptables Proxier"
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.988669       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.989566       1 server.go:872] "Version info" version="v1.30.0"
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.989671       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.992700       1 config.go:192] "Starting service config controller"
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.993131       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.993327       1 config.go:101] "Starting endpoint slice config controller"
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.993339       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.994714       1 config.go:319] "Starting node config controller"
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:24.994744       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:25.094420       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:25.094530       1 shared_informer.go:320] Caches are synced for service config
	I0514 00:18:10.515865    4316 command_runner.go:130] ! I0513 23:56:25.094981       1 shared_informer.go:320] Caches are synced for node config
	I0514 00:18:10.518267    4316 logs.go:123] Gathering logs for kube-controller-manager [e96f94398d6d] ...
	I0514 00:18:10.518267    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e96f94398d6d"
	I0514 00:18:10.548103    4316 command_runner.go:130] ! I0513 23:56:04.448604       1 serving.go:380] Generated self-signed cert in-memory
	I0514 00:18:10.549011    4316 command_runner.go:130] ! I0513 23:56:04.932336       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0514 00:18:10.549011    4316 command_runner.go:130] ! I0513 23:56:04.932378       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:10.549093    4316 command_runner.go:130] ! I0513 23:56:04.934044       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0514 00:18:10.549093    4316 command_runner.go:130] ! I0513 23:56:04.934133       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:10.549093    4316 command_runner.go:130] ! I0513 23:56:04.934796       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0514 00:18:10.549093    4316 command_runner.go:130] ! I0513 23:56:04.935005       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:10.549093    4316 command_runner.go:130] ! I0513 23:56:09.124957       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0514 00:18:10.549093    4316 command_runner.go:130] ! I0513 23:56:09.125092       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0514 00:18:10.549093    4316 command_runner.go:130] ! I0513 23:56:09.140996       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0514 00:18:10.549093    4316 command_runner.go:130] ! I0513 23:56:09.141447       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0514 00:18:10.549093    4316 command_runner.go:130] ! I0513 23:56:09.141567       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0514 00:18:10.549248    4316 command_runner.go:130] ! I0513 23:56:09.156847       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0514 00:18:10.549248    4316 command_runner.go:130] ! I0513 23:56:09.157241       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0514 00:18:10.549248    4316 command_runner.go:130] ! I0513 23:56:09.157455       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0514 00:18:10.549248    4316 command_runner.go:130] ! I0513 23:56:09.170795       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0514 00:18:10.549335    4316 command_runner.go:130] ! I0513 23:56:09.171005       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0514 00:18:10.549335    4316 command_runner.go:130] ! I0513 23:56:09.171684       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0514 00:18:10.549335    4316 command_runner.go:130] ! I0513 23:56:09.171921       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0514 00:18:10.549335    4316 command_runner.go:130] ! I0513 23:56:09.172144       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0514 00:18:10.549335    4316 command_runner.go:130] ! I0513 23:56:09.183975       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0514 00:18:10.549335    4316 command_runner.go:130] ! I0513 23:56:09.184362       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0514 00:18:10.549466    4316 command_runner.go:130] ! I0513 23:56:09.185233       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0514 00:18:10.549466    4316 command_runner.go:130] ! I0513 23:56:09.230173       1 shared_informer.go:320] Caches are synced for tokens
	I0514 00:18:10.549466    4316 command_runner.go:130] ! I0513 23:56:09.242679       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0514 00:18:10.549466    4316 command_runner.go:130] ! I0513 23:56:09.242735       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0514 00:18:10.549574    4316 command_runner.go:130] ! I0513 23:56:09.242821       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0514 00:18:10.549574    4316 command_runner.go:130] ! I0513 23:56:09.249513       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0514 00:18:10.549574    4316 command_runner.go:130] ! I0513 23:56:09.249614       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0514 00:18:10.549660    4316 command_runner.go:130] ! I0513 23:56:09.249731       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0514 00:18:10.549660    4316 command_runner.go:130] ! I0513 23:56:09.249824       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0514 00:18:10.549743    4316 command_runner.go:130] ! I0513 23:56:09.249912       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0514 00:18:10.549743    4316 command_runner.go:130] ! I0513 23:56:09.250132       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0514 00:18:10.549743    4316 command_runner.go:130] ! I0513 23:56:09.250216       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0514 00:18:10.549832    4316 command_runner.go:130] ! I0513 23:56:09.250270       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0514 00:18:10.549832    4316 command_runner.go:130] ! I0513 23:56:09.250425       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0514 00:18:10.549832    4316 command_runner.go:130] ! I0513 23:56:09.250604       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0514 00:18:10.549918    4316 command_runner.go:130] ! I0513 23:56:09.250656       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0514 00:18:10.549918    4316 command_runner.go:130] ! I0513 23:56:09.250695       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0514 00:18:10.549918    4316 command_runner.go:130] ! I0513 23:56:09.250745       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0514 00:18:10.550010    4316 command_runner.go:130] ! I0513 23:56:09.250794       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0514 00:18:10.550010    4316 command_runner.go:130] ! I0513 23:56:09.250851       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0514 00:18:10.550010    4316 command_runner.go:130] ! I0513 23:56:09.250883       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0514 00:18:10.550010    4316 command_runner.go:130] ! I0513 23:56:09.250994       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0514 00:18:10.550110    4316 command_runner.go:130] ! I0513 23:56:09.251028       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0514 00:18:10.550133    4316 command_runner.go:130] ! I0513 23:56:09.251909       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0514 00:18:10.550133    4316 command_runner.go:130] ! I0513 23:56:09.251999       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0514 00:18:10.550133    4316 command_runner.go:130] ! I0513 23:56:09.252142       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0514 00:18:10.550133    4316 command_runner.go:130] ! I0513 23:56:09.305089       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0514 00:18:10.550218    4316 command_runner.go:130] ! I0513 23:56:09.305302       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0514 00:18:10.550218    4316 command_runner.go:130] ! I0513 23:56:09.305357       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0514 00:18:10.550218    4316 command_runner.go:130] ! I0513 23:56:09.305376       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0514 00:18:10.550301    4316 command_runner.go:130] ! I0513 23:56:09.321907       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0514 00:18:10.550301    4316 command_runner.go:130] ! I0513 23:56:09.322244       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0514 00:18:10.550301    4316 command_runner.go:130] ! I0513 23:56:09.322270       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0514 00:18:10.550301    4316 command_runner.go:130] ! I0513 23:56:09.324160       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0514 00:18:10.550301    4316 command_runner.go:130] ! I0513 23:56:09.324208       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0514 00:18:10.550392    4316 command_runner.go:130] ! E0513 23:56:09.334850       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0514 00:18:10.550392    4316 command_runner.go:130] ! I0513 23:56:09.335135       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0514 00:18:10.550478    4316 command_runner.go:130] ! I0513 23:56:09.346530       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0514 00:18:10.550478    4316 command_runner.go:130] ! I0513 23:56:09.346809       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0514 00:18:10.550478    4316 command_runner.go:130] ! I0513 23:56:09.346883       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0514 00:18:10.550478    4316 command_runner.go:130] ! I0513 23:56:09.385297       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0514 00:18:10.550564    4316 command_runner.go:130] ! I0513 23:56:09.385391       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0514 00:18:10.550564    4316 command_runner.go:130] ! I0513 23:56:09.385403       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0514 00:18:10.550564    4316 command_runner.go:130] ! I0513 23:56:09.542113       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0514 00:18:10.550564    4316 command_runner.go:130] ! I0513 23:56:09.542271       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0514 00:18:10.550654    4316 command_runner.go:130] ! I0513 23:56:09.542284       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0514 00:18:10.550654    4316 command_runner.go:130] ! I0513 23:56:09.581300       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0514 00:18:10.550654    4316 command_runner.go:130] ! I0513 23:56:09.581321       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0514 00:18:10.550742    4316 command_runner.go:130] ! I0513 23:56:09.581454       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:10.550742    4316 command_runner.go:130] ! I0513 23:56:09.581971       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0514 00:18:10.550742    4316 command_runner.go:130] ! I0513 23:56:09.582008       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0514 00:18:10.550742    4316 command_runner.go:130] ! I0513 23:56:09.582030       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:10.550833    4316 command_runner.go:130] ! I0513 23:56:09.582896       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0514 00:18:10.550833    4316 command_runner.go:130] ! I0513 23:56:09.582908       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0514 00:18:10.550833    4316 command_runner.go:130] ! I0513 23:56:09.582922       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:10.550833    4316 command_runner.go:130] ! I0513 23:56:09.583436       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0514 00:18:10.550926    4316 command_runner.go:130] ! I0513 23:56:09.583678       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0514 00:18:10.550926    4316 command_runner.go:130] ! I0513 23:56:09.583691       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0514 00:18:10.550926    4316 command_runner.go:130] ! I0513 23:56:09.583727       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:10.551014    4316 command_runner.go:130] ! I0513 23:56:09.734073       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0514 00:18:10.551014    4316 command_runner.go:130] ! I0513 23:56:09.734159       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0514 00:18:10.551014    4316 command_runner.go:130] ! I0513 23:56:09.734446       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0514 00:18:10.551014    4316 command_runner.go:130] ! I0513 23:56:09.885354       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0514 00:18:10.551014    4316 command_runner.go:130] ! I0513 23:56:09.885756       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0514 00:18:10.551014    4316 command_runner.go:130] ! I0513 23:56:09.885934       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0514 00:18:10.551134    4316 command_runner.go:130] ! I0513 23:56:10.040288       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0514 00:18:10.551134    4316 command_runner.go:130] ! I0513 23:56:10.040486       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0514 00:18:10.551134    4316 command_runner.go:130] ! I0513 23:56:20.090311       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0514 00:18:10.551224    4316 command_runner.go:130] ! I0513 23:56:20.090418       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0514 00:18:10.551224    4316 command_runner.go:130] ! I0513 23:56:20.090428       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0514 00:18:10.551224    4316 command_runner.go:130] ! I0513 23:56:20.090911       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0514 00:18:10.551224    4316 command_runner.go:130] ! I0513 23:56:20.091093       1 shared_informer.go:313] Waiting for caches to sync for node
	I0514 00:18:10.551224    4316 command_runner.go:130] ! I0513 23:56:20.101598       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0514 00:18:10.551294    4316 command_runner.go:130] ! I0513 23:56:20.101778       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0514 00:18:10.551294    4316 command_runner.go:130] ! I0513 23:56:20.101805       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0514 00:18:10.551294    4316 command_runner.go:130] ! I0513 23:56:20.114509       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0514 00:18:10.551294    4316 command_runner.go:130] ! I0513 23:56:20.114580       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0514 00:18:10.551365    4316 command_runner.go:130] ! I0513 23:56:20.114849       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0514 00:18:10.551365    4316 command_runner.go:130] ! I0513 23:56:20.114678       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0514 00:18:10.551365    4316 command_runner.go:130] ! I0513 23:56:20.115038       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0514 00:18:10.551436    4316 command_runner.go:130] ! I0513 23:56:20.115048       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0514 00:18:10.551436    4316 command_runner.go:130] ! E0513 23:56:20.117646       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0514 00:18:10.551436    4316 command_runner.go:130] ! I0513 23:56:20.117865       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0514 00:18:10.551436    4316 command_runner.go:130] ! I0513 23:56:20.130498       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0514 00:18:10.551506    4316 command_runner.go:130] ! I0513 23:56:20.130711       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0514 00:18:10.551506    4316 command_runner.go:130] ! I0513 23:56:20.130932       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0514 00:18:10.551506    4316 command_runner.go:130] ! I0513 23:56:20.143035       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0514 00:18:10.551506    4316 command_runner.go:130] ! I0513 23:56:20.143414       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0514 00:18:10.551582    4316 command_runner.go:130] ! I0513 23:56:20.143607       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0514 00:18:10.551582    4316 command_runner.go:130] ! I0513 23:56:20.160023       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0514 00:18:10.551582    4316 command_runner.go:130] ! I0513 23:56:20.160191       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0514 00:18:10.551582    4316 command_runner.go:130] ! I0513 23:56:20.160215       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0514 00:18:10.551582    4316 command_runner.go:130] ! I0513 23:56:20.170613       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0514 00:18:10.551659    4316 command_runner.go:130] ! I0513 23:56:20.170951       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0514 00:18:10.551659    4316 command_runner.go:130] ! I0513 23:56:20.171064       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0514 00:18:10.551659    4316 command_runner.go:130] ! I0513 23:56:20.179840       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0514 00:18:10.551659    4316 command_runner.go:130] ! I0513 23:56:20.180447       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0514 00:18:10.551746    4316 command_runner.go:130] ! I0513 23:56:20.180590       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0514 00:18:10.551746    4316 command_runner.go:130] ! I0513 23:56:20.190977       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0514 00:18:10.551746    4316 command_runner.go:130] ! I0513 23:56:20.191286       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0514 00:18:10.551746    4316 command_runner.go:130] ! I0513 23:56:20.191448       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0514 00:18:10.551830    4316 command_runner.go:130] ! I0513 23:56:20.204888       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0514 00:18:10.551830    4316 command_runner.go:130] ! I0513 23:56:20.205578       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0514 00:18:10.551830    4316 command_runner.go:130] ! I0513 23:56:20.205670       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0514 00:18:10.551830    4316 command_runner.go:130] ! I0513 23:56:20.239034       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0514 00:18:10.551830    4316 command_runner.go:130] ! I0513 23:56:20.239193       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0514 00:18:10.551909    4316 command_runner.go:130] ! I0513 23:56:20.239262       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0514 00:18:10.551909    4316 command_runner.go:130] ! I0513 23:56:20.482568       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0514 00:18:10.551909    4316 command_runner.go:130] ! I0513 23:56:20.486046       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0514 00:18:10.551909    4316 command_runner.go:130] ! I0513 23:56:20.486073       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0514 00:18:10.551909    4316 command_runner.go:130] ! I0513 23:56:20.486093       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:20.786163       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:20.786358       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.082938       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.083657       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.083743       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.238006       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.238099       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.238152       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.238163       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.283674       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.283751       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.283986       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.284217       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.442664       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.442840       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.442854       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.587997       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.588249       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.588322       1 shared_informer.go:313] Waiting for caches to sync for job
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.740205       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.740392       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.740547       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.889738       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.890053       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:21.890145       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.038114       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.038197       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.038216       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.038314       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.038329       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.291303       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.291332       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.291999       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.299124       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.317101       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.321553       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100\" does not exist"
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.322540       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.335837       1 shared_informer.go:320] Caches are synced for cronjob
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.339493       1 shared_informer.go:320] Caches are synced for PV protection
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.339494       1 shared_informer.go:320] Caches are synced for GC
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.339605       1 shared_informer.go:320] Caches are synced for crt configmap
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.340940       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.341044       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.342309       1 shared_informer.go:320] Caches are synced for service account
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.343675       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.343828       1 shared_informer.go:320] Caches are synced for PVC protection
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.347539       1 shared_informer.go:320] Caches are synced for expand
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.357773       1 shared_informer.go:320] Caches are synced for deployment
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.361377       1 shared_informer.go:320] Caches are synced for ephemeral
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.372019       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.380620       1 shared_informer.go:320] Caches are synced for stateful set
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.382092       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.382250       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.382979       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.384565       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.384604       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0514 00:18:10.552128    4316 command_runner.go:130] ! I0513 23:56:22.384724       1 shared_informer.go:320] Caches are synced for HPA
	I0514 00:18:10.553146    4316 command_runner.go:130] ! I0513 23:56:22.386009       1 shared_informer.go:320] Caches are synced for TTL
	I0514 00:18:10.553146    4316 command_runner.go:130] ! I0513 23:56:22.386117       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0514 00:18:10.553146    4316 command_runner.go:130] ! I0513 23:56:22.386299       1 shared_informer.go:320] Caches are synced for attach detach
	I0514 00:18:10.553146    4316 command_runner.go:130] ! I0513 23:56:22.389103       1 shared_informer.go:320] Caches are synced for job
	I0514 00:18:10.553146    4316 command_runner.go:130] ! I0513 23:56:22.390596       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0514 00:18:10.553146    4316 command_runner.go:130] ! I0513 23:56:22.391278       1 shared_informer.go:320] Caches are synced for node
	I0514 00:18:10.553146    4316 command_runner.go:130] ! I0513 23:56:22.391538       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0514 00:18:10.553146    4316 command_runner.go:130] ! I0513 23:56:22.391663       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0514 00:18:10.553146    4316 command_runner.go:130] ! I0513 23:56:22.392031       1 shared_informer.go:320] Caches are synced for namespace
	I0514 00:18:10.553258    4316 command_runner.go:130] ! I0513 23:56:22.392207       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0514 00:18:10.553258    4316 command_runner.go:130] ! I0513 23:56:22.392242       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0514 00:18:10.553258    4316 command_runner.go:130] ! I0513 23:56:22.392249       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0514 00:18:10.553258    4316 command_runner.go:130] ! I0513 23:56:22.402105       1 shared_informer.go:320] Caches are synced for daemon sets
	I0514 00:18:10.553335    4316 command_runner.go:130] ! I0513 23:56:22.405500       1 shared_informer.go:320] Caches are synced for disruption
	I0514 00:18:10.553335    4316 command_runner.go:130] ! I0513 23:56:22.406927       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0514 00:18:10.553335    4316 command_runner.go:130] ! I0513 23:56:22.411111       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100" podCIDRs=["10.244.0.0/24"]
	I0514 00:18:10.553335    4316 command_runner.go:130] ! I0513 23:56:22.431075       1 shared_informer.go:320] Caches are synced for persistent volume
	I0514 00:18:10.553405    4316 command_runner.go:130] ! I0513 23:56:22.443663       1 shared_informer.go:320] Caches are synced for endpoint
	I0514 00:18:10.553405    4316 command_runner.go:130] ! I0513 23:56:22.552382       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 00:18:10.553405    4316 command_runner.go:130] ! I0513 23:56:22.573274       1 shared_informer.go:320] Caches are synced for taint
	I0514 00:18:10.553405    4316 command_runner.go:130] ! I0513 23:56:22.573442       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0514 00:18:10.553471    4316 command_runner.go:130] ! I0513 23:56:22.573935       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100"
	I0514 00:18:10.553471    4316 command_runner.go:130] ! I0513 23:56:22.574179       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0514 00:18:10.553471    4316 command_runner.go:130] ! I0513 23:56:22.586849       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0514 00:18:10.553471    4316 command_runner.go:130] ! I0513 23:56:22.602574       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 00:18:10.553543    4316 command_runner.go:130] ! I0513 23:56:23.018846       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 00:18:10.553543    4316 command_runner.go:130] ! I0513 23:56:23.087540       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 00:18:10.553625    4316 command_runner.go:130] ! I0513 23:56:23.087631       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0514 00:18:10.553625    4316 command_runner.go:130] ! I0513 23:56:23.691681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="593.37356ms"
	I0514 00:18:10.553625    4316 command_runner.go:130] ! I0513 23:56:23.736584       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.765409ms"
	I0514 00:18:10.553625    4316 command_runner.go:130] ! I0513 23:56:23.736691       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.105µs"
	I0514 00:18:10.553716    4316 command_runner.go:130] ! I0513 23:56:23.741069       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="82.307µs"
	I0514 00:18:10.553716    4316 command_runner.go:130] ! I0513 23:56:24.558346       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.410112ms"
	I0514 00:18:10.553716    4316 command_runner.go:130] ! I0513 23:56:24.599621       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.388659ms"
	I0514 00:18:10.553793    4316 command_runner.go:130] ! I0513 23:56:24.599778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.705µs"
	I0514 00:18:10.553793    4316 command_runner.go:130] ! I0513 23:56:35.460855       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.604µs"
	I0514 00:18:10.553793    4316 command_runner.go:130] ! I0513 23:56:35.495875       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="63.404µs"
	I0514 00:18:10.553793    4316 command_runner.go:130] ! I0513 23:56:36.868700       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="85.505µs"
	I0514 00:18:10.553865    4316 command_runner.go:130] ! I0513 23:56:36.916603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.935352ms"
	I0514 00:18:10.553865    4316 command_runner.go:130] ! I0513 23:56:36.917123       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.803µs"
	I0514 00:18:10.553865    4316 command_runner.go:130] ! I0513 23:56:37.577172       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0514 00:18:10.553932    4316 command_runner.go:130] ! I0513 23:59:02.230067       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m02\" does not exist"
	I0514 00:18:10.553932    4316 command_runner.go:130] ! I0513 23:59:02.246355       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m02" podCIDRs=["10.244.1.0/24"]
	I0514 00:18:10.553932    4316 command_runner.go:130] ! I0513 23:59:02.603699       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m02"
	I0514 00:18:10.554002    4316 command_runner.go:130] ! I0513 23:59:22.527169       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:10.554002    4316 command_runner.go:130] ! I0513 23:59:45.791856       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.887671ms"
	I0514 00:18:10.554002    4316 command_runner.go:130] ! I0513 23:59:45.808219       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.096894ms"
	I0514 00:18:10.554071    4316 command_runner.go:130] ! I0513 23:59:45.808747       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.005µs"
	I0514 00:18:10.554071    4316 command_runner.go:130] ! I0513 23:59:45.809833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.705µs"
	I0514 00:18:10.554071    4316 command_runner.go:130] ! I0513 23:59:45.811263       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.604µs"
	I0514 00:18:10.554071    4316 command_runner.go:130] ! I0513 23:59:48.526617       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.926472ms"
	I0514 00:18:10.554071    4316 command_runner.go:130] ! I0513 23:59:48.529326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.302µs"
	I0514 00:18:10.554175    4316 command_runner.go:130] ! I0513 23:59:48.555529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.972453ms"
	I0514 00:18:10.554195    4316 command_runner.go:130] ! I0513 23:59:48.556317       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.601µs"
	I0514 00:18:10.554195    4316 command_runner.go:130] ! I0514 00:03:17.563212       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:10.554195    4316 command_runner.go:130] ! I0514 00:03:17.565297       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m03\" does not exist"
	I0514 00:18:10.554266    4316 command_runner.go:130] ! I0514 00:03:17.579900       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m03" podCIDRs=["10.244.2.0/24"]
	I0514 00:18:10.554266    4316 command_runner.go:130] ! I0514 00:03:17.665892       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m03"
	I0514 00:18:10.554266    4316 command_runner.go:130] ! I0514 00:03:38.035898       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:10.554350    4316 command_runner.go:130] ! I0514 00:10:17.797191       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:10.554350    4316 command_runner.go:130] ! I0514 00:12:39.070271       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:10.554350    4316 command_runner.go:130] ! I0514 00:12:44.527915       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:10.554434    4316 command_runner.go:130] ! I0514 00:12:44.528275       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m03\" does not exist"
	I0514 00:18:10.554434    4316 command_runner.go:130] ! I0514 00:12:44.543895       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m03" podCIDRs=["10.244.3.0/24"]
	I0514 00:18:10.554434    4316 command_runner.go:130] ! I0514 00:12:49.983419       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:10.554513    4316 command_runner.go:130] ! I0514 00:14:17.920991       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:18:10.554538    4316 command_runner.go:130] ! I0514 00:14:33.013074       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.740609ms"
	I0514 00:18:10.554569    4316 command_runner.go:130] ! I0514 00:14:33.013918       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.506µs"
	I0514 00:18:10.569999    4316 logs.go:123] Gathering logs for kindnet [2b424a7cd98c] ...
	I0514 00:18:10.569999    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b424a7cd98c"
	I0514 00:18:10.593766    4316 command_runner.go:130] ! I0514 00:17:28.349800       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0514 00:18:10.593766    4316 command_runner.go:130] ! I0514 00:17:28.349935       1 main.go:107] hostIP = 172.23.102.122
	I0514 00:18:10.593766    4316 command_runner.go:130] ! podIP = 172.23.102.122
	I0514 00:18:10.593766    4316 command_runner.go:130] ! I0514 00:17:28.441282       1 main.go:116] setting mtu 1500 for CNI 
	I0514 00:18:10.593766    4316 command_runner.go:130] ! I0514 00:17:28.441413       1 main.go:146] kindnetd IP family: "ipv4"
	I0514 00:18:10.593766    4316 command_runner.go:130] ! I0514 00:17:28.441441       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0514 00:18:10.593766    4316 command_runner.go:130] ! I0514 00:17:29.045047       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:10.593766    4316 command_runner.go:130] ! I0514 00:17:29.045110       1 main.go:227] handling current node
	I0514 00:18:10.593766    4316 command_runner.go:130] ! I0514 00:17:29.045545       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:10.593766    4316 command_runner.go:130] ! I0514 00:17:29.045580       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:10.594304    4316 command_runner.go:130] ! I0514 00:17:29.045839       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.23.109.58 Flags: [] Table: 0} 
	I0514 00:18:10.594304    4316 command_runner.go:130] ! I0514 00:17:29.045983       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:29.045993       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:29.046039       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.23.102.231 Flags: [] Table: 0} 
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:39.055904       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:39.056127       1 main.go:227] handling current node
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:39.056141       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:39.056155       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:39.056412       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:39.056502       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:49.062369       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:49.062453       1 main.go:227] handling current node
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:49.062465       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:49.062483       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:49.062816       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:49.062843       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:59.075229       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:59.075506       1 main.go:227] handling current node
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:59.075588       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:59.075650       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:59.075827       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:17:59.075835       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:18:09.090534       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:18:09.090748       1 main.go:227] handling current node
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:18:09.090769       1 main.go:223] Handling node with IPs: map[172.23.109.58:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:18:09.090777       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:18:09.091233       1 main.go:223] Handling node with IPs: map[172.23.102.231:{}]
	I0514 00:18:10.594381    4316 command_runner.go:130] ! I0514 00:18:09.091328       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.3.0/24] 
	I0514 00:18:10.598592    4316 logs.go:123] Gathering logs for Docker ...
	I0514 00:18:10.598694    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0514 00:18:10.620939    4316 command_runner.go:130] > May 14 00:15:30 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0514 00:18:10.620939    4316 command_runner.go:130] > May 14 00:15:30 minikube cri-dockerd[223]: time="2024-05-14T00:15:30Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0514 00:18:10.620939    4316 command_runner.go:130] > May 14 00:15:30 minikube cri-dockerd[223]: time="2024-05-14T00:15:30Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0514 00:18:10.620939    4316 command_runner.go:130] > May 14 00:15:30 minikube cri-dockerd[223]: time="2024-05-14T00:15:30Z" level=info msg="Start docker client with request timeout 0s"
	I0514 00:18:10.620939    4316 command_runner.go:130] > May 14 00:15:30 minikube cri-dockerd[223]: time="2024-05-14T00:15:30Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0514 00:18:10.621776    4316 command_runner.go:130] > May 14 00:15:31 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:10.621776    4316 command_runner.go:130] > May 14 00:15:31 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0514 00:18:10.621776    4316 command_runner.go:130] > May 14 00:15:31 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0514 00:18:10.621776    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0514 00:18:10.621776    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0514 00:18:10.621776    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0514 00:18:10.621882    4316 command_runner.go:130] > May 14 00:15:33 minikube cri-dockerd[418]: time="2024-05-14T00:15:33Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0514 00:18:10.621882    4316 command_runner.go:130] > May 14 00:15:33 minikube cri-dockerd[418]: time="2024-05-14T00:15:33Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0514 00:18:10.621882    4316 command_runner.go:130] > May 14 00:15:33 minikube cri-dockerd[418]: time="2024-05-14T00:15:33Z" level=info msg="Start docker client with request timeout 0s"
	I0514 00:18:10.621962    4316 command_runner.go:130] > May 14 00:15:33 minikube cri-dockerd[418]: time="2024-05-14T00:15:33Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0514 00:18:10.622183    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:10.622183    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0514 00:18:10.622183    4316 command_runner.go:130] > May 14 00:15:33 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0514 00:18:10.622259    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0514 00:18:10.622259    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0514 00:18:10.622259    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0514 00:18:10.622259    4316 command_runner.go:130] > May 14 00:15:36 minikube cri-dockerd[426]: time="2024-05-14T00:15:36Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0514 00:18:10.622259    4316 command_runner.go:130] > May 14 00:15:36 minikube cri-dockerd[426]: time="2024-05-14T00:15:36Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0514 00:18:10.622335    4316 command_runner.go:130] > May 14 00:15:36 minikube cri-dockerd[426]: time="2024-05-14T00:15:36Z" level=info msg="Start docker client with request timeout 0s"
	I0514 00:18:10.622386    4316 command_runner.go:130] > May 14 00:15:36 minikube cri-dockerd[426]: time="2024-05-14T00:15:36Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0514 00:18:10.622386    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:10.622420    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0514 00:18:10.622449    4316 command_runner.go:130] > May 14 00:15:36 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0514 00:18:10.622449    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0514 00:18:10.622449    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0514 00:18:10.622510    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0514 00:18:10.622510    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0514 00:18:10.622510    4316 command_runner.go:130] > May 14 00:15:38 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0514 00:18:10.622572    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 systemd[1]: Starting Docker Application Container Engine...
	I0514 00:18:10.622572    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[654]: time="2024-05-14T00:16:17.349024460Z" level=info msg="Starting up"
	I0514 00:18:10.622572    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[654]: time="2024-05-14T00:16:17.349886331Z" level=info msg="containerd not running, starting managed containerd"
	I0514 00:18:10.622623    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[654]: time="2024-05-14T00:16:17.351031392Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=660
	I0514 00:18:10.622657    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.380428255Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0514 00:18:10.622657    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.407060046Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0514 00:18:10.622703    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.407104860Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0514 00:18:10.622703    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.407157277Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0514 00:18:10.622734    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.407182685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.622781    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408093872Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:10.622781    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408200005Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.622812    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408421875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:10.622859    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408522107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.622859    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408552116Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0514 00:18:10.622914    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.408565820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.622986    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.409126597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.623018    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.409855027Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.623018    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.412841968Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:10.623018    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.412982412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.623547    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.413109352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:10.623588    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.413195779Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0514 00:18:10.623681    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.414192994Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0514 00:18:10.623728    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.414303628Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0514 00:18:10.623728    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.414321234Z" level=info msg="metadata content store policy set" policy=shared
	I0514 00:18:10.623768    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420644226Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0514 00:18:10.623856    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420793973Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0514 00:18:10.623902    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420815380Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0514 00:18:10.623942    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420835086Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0514 00:18:10.623942    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.420849391Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0514 00:18:10.623991    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421006640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0514 00:18:10.624030    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421303834Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0514 00:18:10.624077    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421395163Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0514 00:18:10.624118    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421479890Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0514 00:18:10.624118    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421494994Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0514 00:18:10.624204    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421507198Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.624250    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421523703Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.624290    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421540509Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.624290    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421554613Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.624338    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421571518Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.624377    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421584022Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.624424    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421594526Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.627440    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421604629Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.628010    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421626336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628010    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421639040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628062    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421651344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628062    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421662947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421673350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421684554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421695257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421705961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421717564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421730268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421774782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421787286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421797990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421811094Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421828299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421838703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421849206Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421898721Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421926330Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0514 00:18:10.628092    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.421987549Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0514 00:18:10.628684    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422004755Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0514 00:18:10.628762    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422070276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.628808    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422106987Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422118891Z" level=info msg="NRI interface is disabled by configuration."
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422453196Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422571233Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422619148Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:17 multinode-101100 dockerd[660]: time="2024-05-14T00:16:17.422687970Z" level=info msg="containerd successfully booted in 0.044863s"
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:18 multinode-101100 dockerd[654]: time="2024-05-14T00:16:18.404653025Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:18 multinode-101100 dockerd[654]: time="2024-05-14T00:16:18.578701970Z" level=info msg="Loading containers: start."
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.027152626Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.105905244Z" level=info msg="Loading containers: done."
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.135340666Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.136139953Z" level=info msg="Daemon has completed initialization"
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.185948604Z" level=info msg="API listen on [::]:2376"
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 dockerd[654]: time="2024-05-14T00:16:19.186071317Z" level=info msg="API listen on /var/run/docker.sock"
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:19 multinode-101100 systemd[1]: Started Docker Application Container Engine.
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 systemd[1]: Stopping Docker Application Container Engine...
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.988898314Z" level=info msg="Processing signal 'terminated'"
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.989838579Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.990583130Z" level=info msg="Daemon shutdown complete"
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.990661536Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:41 multinode-101100 dockerd[654]: time="2024-05-14T00:16:41.990696238Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:42 multinode-101100 systemd[1]: docker.service: Deactivated successfully.
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:42 multinode-101100 systemd[1]: Stopped Docker Application Container Engine.
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 systemd[1]: Starting Docker Application Container Engine...
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:43.059729298Z" level=info msg="Starting up"
	I0514 00:18:10.628848    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:43.060541955Z" level=info msg="containerd not running, starting managed containerd"
	I0514 00:18:10.629377    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:43.061850245Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1055
	I0514 00:18:10.629417    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.092613476Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0514 00:18:10.629466    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115368453Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0514 00:18:10.629466    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115403155Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115435257Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115450359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115473760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115486261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115635771Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115738478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115756280Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115766280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.115789882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.116031099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.119790059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.119888566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120181886Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0514 00:18:10.629498    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120287794Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0514 00:18:10.630014    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120385900Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0514 00:18:10.630103    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120406702Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0514 00:18:10.630154    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120419603Z" level=info msg="metadata content store policy set" policy=shared
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120713023Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120746825Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120760126Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120773227Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120785328Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120826831Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.120999543Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121054147Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121092049Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121102050Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121115951Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121126152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121135052Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121145153Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121156354Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121165854Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121175255Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121184656Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121204657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121216358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121225759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121235159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121243960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121254361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121263161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121275762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121287763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121299564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121364668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121378369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121388070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121400871Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121421772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.630186    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121432873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.631124    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121442174Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0514 00:18:10.631172    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121474076Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0514 00:18:10.631172    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121485477Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0514 00:18:10.631252    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121493977Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0514 00:18:10.631299    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121504178Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0514 00:18:10.631332    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121548581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0514 00:18:10.631412    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121558382Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0514 00:18:10.631459    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121570783Z" level=info msg="NRI interface is disabled by configuration."
	I0514 00:18:10.631459    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121732894Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0514 00:18:10.631491    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121765696Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0514 00:18:10.631540    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121795498Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0514 00:18:10.631580    4316 command_runner.go:130] > May 14 00:16:43 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:43.121808099Z" level=info msg="containerd successfully booted in 0.031442s"
	I0514 00:18:10.631626    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.110784113Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0514 00:18:10.631626    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.142577516Z" level=info msg="Loading containers: start."
	I0514 00:18:10.631658    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.405628939Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0514 00:18:10.631658    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.480865351Z" level=info msg="Loading containers: done."
	I0514 00:18:10.631709    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.503621028Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0514 00:18:10.631741    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.503703734Z" level=info msg="Daemon has completed initialization"
	I0514 00:18:10.631741    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.545253312Z" level=info msg="API listen on /var/run/docker.sock"
	I0514 00:18:10.631782    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 dockerd[1049]: time="2024-05-14T00:16:44.545312016Z" level=info msg="API listen on [::]:2376"
	I0514 00:18:10.631782    4316 command_runner.go:130] > May 14 00:16:44 multinode-101100 systemd[1]: Started Docker Application Container Engine.
	I0514 00:18:10.631814    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0514 00:18:10.631814    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0514 00:18:10.631855    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0514 00:18:10.631887    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Start docker client with request timeout 0s"
	I0514 00:18:10.631887    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0514 00:18:10.631929    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Loaded network plugin cni"
	I0514 00:18:10.631929    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:45Z" level=info msg="Start cri-dockerd grpc backend"
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:45 multinode-101100 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-xqj6w_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"76d1b8ce19aba5b210540936b7a4b3d885cf4632a985872e3cf05d6cea2e0ca2\""
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:50Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-4kmx4_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"8bb49b28c842af421711ef939d018058baa07a32bbcdc98976511d4800986697\""
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.717439407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.717535614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.717551915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.718214261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.720663031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.720923549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.721017455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.721295774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.783128658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.783344773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.783450280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.783657895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.816093342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.816151946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.816166547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:50.816251853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.631961    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ddcaadef980aca40a7740fe7c59949c3cb803d9fb441eca155b02162f3422bb8/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:10.633051    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/659643d47b9ae231a8b97d9871cab6dfac5f6d06e647c919d14170832ee47683/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:10.633090    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/419648c0d4053fc49953367496f1dbfe0fc7ce631e09569d18f5031a7c94053b/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:10.633104    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/509b8407e0955daa05e6418b83790728e61d0bd72fecdd814c8e92ae9e80d3a3/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:10.633104    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.258935521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.633169    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.259980593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.633208    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.260187008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633208    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.260361520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633270    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.272553064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.633270    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.272771779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.633312    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.272798781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633342    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.272907589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633342    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.314782590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.633382    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.314905098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.633412    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.314946601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633451    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.315263523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633480    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.385829312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.633480    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.386016625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.633557    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.386135333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633922    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:51.386495758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633922    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:55Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0514 00:18:10.633964    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.444453862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.444531867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.444549969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.444647976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.461909471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.462106685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.462142187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.462265196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.492511091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.492965923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.493135035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.493390352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a8ac60a565998ca52581e38272f2fcdb5f7038023f93d728cd74f5b89f5593ed/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/468a0e2976ae45a571a99afabfcd1329c76873e973179fe56cc9ef46e2533698/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.849392115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.849539826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.849623331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.849861048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.857219658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.857468675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.633991    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.857687390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.634517    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:56.858016113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.634517    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:16:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5233e076edceb93931d756579982e556959dfd31508760da215a8407dca14e56/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:10.634547    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:57.218178264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.634616    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:57.218325574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.634616    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:57.218348976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.634661    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 dockerd[1055]: time="2024-05-14T00:16:57.218459383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.634691    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 dockerd[1049]: time="2024-05-14T00:17:17.430189771Z" level=info msg="ignoring event" container=b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0514 00:18:10.634942    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:17.431460316Z" level=info msg="shim disconnected" id=b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7 namespace=moby
	I0514 00:18:10.634988    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:17.431869631Z" level=warning msg="cleaning up after shim disconnected" id=b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7 namespace=moby
	I0514 00:18:10.635020    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:17.432007736Z" level=info msg="cleaning up dead shim" namespace=moby
	I0514 00:18:10.635061    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 dockerd[1049]: time="2024-05-14T00:17:27.281698284Z" level=info msg="ignoring event" container=b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0514 00:18:10.635093    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:27.282877145Z" level=info msg="shim disconnected" id=b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc namespace=moby
	I0514 00:18:10.635143    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:27.283000451Z" level=warning msg="cleaning up after shim disconnected" id=b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc namespace=moby
	I0514 00:18:10.635175    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:27.283015352Z" level=info msg="cleaning up dead shim" namespace=moby
	I0514 00:18:10.635258    4316 command_runner.go:130] > May 14 00:17:28 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:28.098999177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.635258    4316 command_runner.go:130] > May 14 00:17:28 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:28.099271791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.635258    4316 command_runner.go:130] > May 14 00:17:28 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:28.099326694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.635784    4316 command_runner.go:130] > May 14 00:17:28 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:28.099641511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.635824    4316 command_runner.go:130] > May 14 00:17:40 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:40.092603581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.635824    4316 command_runner.go:130] > May 14 00:17:40 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:40.093732951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.635824    4316 command_runner.go:130] > May 14 00:17:40 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:40.093768053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.635875    4316 command_runner.go:130] > May 14 00:17:40 multinode-101100 dockerd[1055]: time="2024-05-14T00:17:40.095427255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.635915    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235051362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.635955    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235156269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.635955    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235169170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.635994    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235258576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.636036    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235645702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.636068    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235713507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.636110    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235730808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.636141    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.235828014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.636141    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:18:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1cccb5e8cee3b173bd49a88aee4239ccc8bc11a3a166316e92f3a9abce9b252d/resolv.conf as [nameserver 172.23.96.1]"
	I0514 00:18:10.636214    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 cri-dockerd[1276]: time="2024-05-14T00:18:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8cb9b6d6d0915742a78c054211d49332a04beb4875f8a8f80cc4131b2a11aa2d/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0514 00:18:10.636285    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.743900500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.636827    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.743970305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.636860    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.744406335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.636899    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.745139484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.636930    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.808545660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0514 00:18:10.636930    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.808756974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0514 00:18:10.636998    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.808962988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.636998    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 dockerd[1055]: time="2024-05-14T00:18:00.809189903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0514 00:18:10.637036    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637066    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637066    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637111    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637142    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637190    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637190    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637259    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637259    4316 command_runner.go:130] > May 14 00:18:03 multinode-101100 dockerd[1049]: 2024/05/14 00:18:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637289    4316 command_runner.go:130] > May 14 00:18:04 multinode-101100 dockerd[1049]: 2024/05/14 00:18:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637338    4316 command_runner.go:130] > May 14 00:18:04 multinode-101100 dockerd[1049]: 2024/05/14 00:18:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637368    4316 command_runner.go:130] > May 14 00:18:04 multinode-101100 dockerd[1049]: 2024/05/14 00:18:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637737    4316 command_runner.go:130] > May 14 00:18:06 multinode-101100 dockerd[1049]: 2024/05/14 00:18:06 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637737    4316 command_runner.go:130] > May 14 00:18:06 multinode-101100 dockerd[1049]: 2024/05/14 00:18:06 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637779    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.637804    4316 command_runner.go:130] > May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0514 00:18:10.668115    4316 logs.go:123] Gathering logs for kubelet ...
	I0514 00:18:10.668115    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 kubelet[1385]: I0514 00:16:46.507609    1385 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 kubelet[1385]: I0514 00:16:46.507660    1385 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 kubelet[1385]: I0514 00:16:46.508230    1385 server.go:927] "Client rotation is on, will bootstrap in background"
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 kubelet[1385]: E0514 00:16:46.508906    1385 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:46 multinode-101100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 kubelet[1441]: I0514 00:16:47.229791    1441 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 kubelet[1441]: I0514 00:16:47.229941    1441 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 kubelet[1441]: I0514 00:16:47.230764    1441 server.go:927] "Client rotation is on, will bootstrap in background"
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 kubelet[1441]: E0514 00:16:47.231303    1441 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:47 multinode-101100 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.717000    1520 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.717452    1520 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.717850    1520 server.go:927] "Client rotation is on, will bootstrap in background"
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.719747    1520 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.734764    1520 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.754342    1520 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.754443    1520 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.755707    1520 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0514 00:18:10.697105    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.755788    1520 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-101100","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0514 00:18:10.697636    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.756671    1520 topology_manager.go:138] "Creating topology manager with none policy"
	I0514 00:18:10.697674    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.756747    1520 container_manager_linux.go:301] "Creating device plugin manager"
	I0514 00:18:10.697674    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.757344    1520 state_mem.go:36] "Initialized new in-memory state store"
	I0514 00:18:10.697674    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.758885    1520 kubelet.go:400] "Attempting to sync node with API server"
	I0514 00:18:10.697721    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.759591    1520 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0514 00:18:10.697750    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.759727    1520 kubelet.go:312] "Adding apiserver pod source"
	I0514 00:18:10.697750    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.760630    1520 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0514 00:18:10.697776    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.765370    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-101100&limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.697831    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.765512    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-101100&limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.697857    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.767039    1520 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0514 00:18:10.697857    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.771297    1520 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0514 00:18:10.697895    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.771834    1520 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0514 00:18:10.697895    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.773545    1520 server.go:1264] "Started kubelet"
	I0514 00:18:10.697925    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.773829    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.697964    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.774013    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.697994    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.780360    1520 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.23.102.122:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-101100.17cf32c62bf0274b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-101100,UID:multinode-101100,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-101100,},FirstTimestamp:2024-05-14 00:16:49.773520715 +0000 UTC m=+0.124549330,LastTimestamp:2024-05-14 00:16:49.773520715 +0000 UTC m=+0.124549330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-101100,}"
	I0514 00:18:10.698042    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.781297    1520 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0514 00:18:10.698077    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.786484    1520 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0514 00:18:10.698109    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.787784    1520 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0514 00:18:10.698109    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.792005    1520 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0514 00:18:10.698146    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.800317    1520 server.go:455] "Adding debug handlers to kubelet server"
	I0514 00:18:10.698146    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.805202    1520 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0514 00:18:10.698179    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.805290    1520 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0514 00:18:10.698179    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.812186    1520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-101100?timeout=10s\": dial tcp 172.23.102.122:8443: connect: connection refused" interval="200ms"
	I0514 00:18:10.698216    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.812333    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.698279    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.812369    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.698279    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.816781    1520 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0514 00:18:10.698319    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.816881    1520 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0514 00:18:10.698319    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.816892    1520 factory.go:221] Registration of the systemd container factory successfully
	I0514 00:18:10.698366    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.849206    1520 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0514 00:18:10.698366    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.849426    1520 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0514 00:18:10.698366    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.849585    1520 state_mem.go:36] "Initialized new in-memory state store"
	I0514 00:18:10.698406    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.850764    1520 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0514 00:18:10.698406    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.850799    1520 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0514 00:18:10.698406    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.850826    1520 policy_none.go:49] "None policy: Start"
	I0514 00:18:10.698455    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.855604    1520 reconciler.go:26] "Reconciler: start to sync state"
	I0514 00:18:10.698455    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.884024    1520 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0514 00:18:10.698494    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.884165    1520 state_mem.go:35] "Initializing new in-memory state store"
	I0514 00:18:10.698494    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.886215    1520 state_mem.go:75] "Updated machine memory state"
	I0514 00:18:10.698494    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.888657    1520 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0514 00:18:10.698494    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.888839    1520 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0514 00:18:10.698547    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.891306    1520 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0514 00:18:10.698584    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.897961    1520 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0514 00:18:10.698584    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.898040    1520 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0514 00:18:10.698613    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.898088    1520 kubelet.go:2337] "Starting kubelet main sync loop"
	I0514 00:18:10.698613    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.898127    1520 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0514 00:18:10.698648    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.898551    1520 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0514 00:18:10.698681    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.899218    1520 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-101100\" not found"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: W0514 00:16:49.900215    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.900324    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.907443    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.909152    1520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.102.122:8443: connect: connection refused" node="multinode-101100"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: E0514 00:16:49.912132    1520 iptables.go:577] "Could not set up iptables canary" err=<
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.999139    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f7c140951f4f8270da243f55135e9f108f3cdf5ef11a4e990e06822ace5adbd"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.999762    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90d7537422a83c9a57ab3bed978e87441e2725a75ebc91f5cad3319d11d4ea18"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:49 multinode-101100 kubelet[1520]: I0514 00:16:49.999846    1520 topology_manager.go:215] "Topology Admit Handler" podUID="378d61cf78af695f1df41e321907a84d" podNamespace="kube-system" podName="kube-apiserver-multinode-101100"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.000880    1520 topology_manager.go:215] "Topology Admit Handler" podUID="5393de2704b2efef461d22fa52aa93c8" podNamespace="kube-system" podName="kube-controller-manager-multinode-101100"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.002201    1520 topology_manager.go:215] "Topology Admit Handler" podUID="8083abd658221f47cabf81a00c4ca98e" podNamespace="kube-system" podName="kube-scheduler-multinode-101100"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.004707    1520 topology_manager.go:215] "Topology Admit Handler" podUID="62d8afc7714e8ab65bff9675d120bb67" podNamespace="kube-system" podName="etcd-multinode-101100"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.007687    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcb3b27edcd2a44b67fad4a74f438a62eec78b20422f6f952396053574dfb97e"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.007796    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da9268fd6556bae4d0109c5065588160bcf737c35e1e5df738d31786425c22ff"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.007891    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bd694480978f356b61313108a6ff716a8d5f6e854fea1e4aa89a76a68d049f0"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.007938    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="287e744a4dc2e511f4e40696c7d3b4193896c0c40a5bb527e569d1d3ec2cb908"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.013966    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad0550a5dabf16106fc2956251a65bccdc32f3f3be1f27246f675964fd548a1f"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.014759    1520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-101100?timeout=10s\": dial tcp 172.23.102.122:8443: connect: connection refused" interval="400ms"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.031437    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76d1b8ce19aba5b210540936b7a4b3d885cf4632a985872e3cf05d6cea2e0ca2"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.048649    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bb49b28c842af421711ef939d018058baa07a32bbcdc98976511d4800986697"
	I0514 00:18:10.698709    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074775    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/378d61cf78af695f1df41e321907a84d-ca-certs\") pod \"kube-apiserver-multinode-101100\" (UID: \"378d61cf78af695f1df41e321907a84d\") " pod="kube-system/kube-apiserver-multinode-101100"
	I0514 00:18:10.699268    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074859    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/378d61cf78af695f1df41e321907a84d-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-101100\" (UID: \"378d61cf78af695f1df41e321907a84d\") " pod="kube-system/kube-apiserver-multinode-101100"
	I0514 00:18:10.699268    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074906    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-k8s-certs\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:10.699348    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074943    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:10.699348    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.074981    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/62d8afc7714e8ab65bff9675d120bb67-etcd-certs\") pod \"etcd-multinode-101100\" (UID: \"62d8afc7714e8ab65bff9675d120bb67\") " pod="kube-system/etcd-multinode-101100"
	I0514 00:18:10.699398    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075015    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/62d8afc7714e8ab65bff9675d120bb67-etcd-data\") pod \"etcd-multinode-101100\" (UID: \"62d8afc7714e8ab65bff9675d120bb67\") " pod="kube-system/etcd-multinode-101100"
	I0514 00:18:10.699427    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075045    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/378d61cf78af695f1df41e321907a84d-k8s-certs\") pod \"kube-apiserver-multinode-101100\" (UID: \"378d61cf78af695f1df41e321907a84d\") " pod="kube-system/kube-apiserver-multinode-101100"
	I0514 00:18:10.699427    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075248    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-ca-certs\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:10.699484    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075285    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-flexvolume-dir\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:10.699514    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075316    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5393de2704b2efef461d22fa52aa93c8-kubeconfig\") pod \"kube-controller-manager-multinode-101100\" (UID: \"5393de2704b2efef461d22fa52aa93c8\") " pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:10.699574    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.075345    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8083abd658221f47cabf81a00c4ca98e-kubeconfig\") pod \"kube-scheduler-multinode-101100\" (UID: \"8083abd658221f47cabf81a00c4ca98e\") " pod="kube-system/kube-scheduler-multinode-101100"
	I0514 00:18:10.699574    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.111262    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.112979    1520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.102.122:8443: connect: connection refused" node="multinode-101100"
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.416229    1520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-101100?timeout=10s\": dial tcp 172.23.102.122:8443: connect: connection refused" interval="800ms"
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: I0514 00:16:50.515338    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.516940    1520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.102.122:8443: connect: connection refused" node="multinode-101100"
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: W0514 00:16:50.730920    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:50 multinode-101100 kubelet[1520]: E0514 00:16:50.730993    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: W0514 00:16:51.074200    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.074270    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: I0514 00:16:51.076835    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="419648c0d4053fc49953367496f1dbfe0fc7ce631e09569d18f5031a7c94053b"
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: W0514 00:16:51.081775    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-101100&limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.081938    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-101100&limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: I0514 00:16:51.108133    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="509b8407e0955daa05e6418b83790728e61d0bd72fecdd814c8e92ae9e80d3a3"
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.218458    1520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-101100?timeout=10s\": dial tcp 172.23.102.122:8443: connect: connection refused" interval="1.6s"
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: I0514 00:16:51.318715    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:10.699600    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.319804    1520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.102.122:8443: connect: connection refused" node="multinode-101100"
	I0514 00:18:10.700124    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: W0514 00:16:51.367337    1520 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.700163    4316 command_runner.go:130] > May 14 00:16:51 multinode-101100 kubelet[1520]: E0514 00:16:51.367409    1520 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.102.122:8443: connect: connection refused
	I0514 00:18:10.700163    4316 command_runner.go:130] > May 14 00:16:52 multinode-101100 kubelet[1520]: I0514 00:16:52.921237    1520 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101100"
	I0514 00:18:10.700211    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.086028    1520 kubelet_node_status.go:112] "Node was previously registered" node="multinode-101100"
	I0514 00:18:10.700211    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.086698    1520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-multinode-101100\" already exists" pod="kube-system/kube-controller-manager-multinode-101100"
	I0514 00:18:10.700251    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.086743    1520 kubelet_node_status.go:76] "Successfully registered node" node="multinode-101100"
	I0514 00:18:10.700251    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.088971    1520 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0514 00:18:10.700299    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.090614    1520 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0514 00:18:10.700299    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.091996    1520 setters.go:580] "Node became not ready" node="multinode-101100" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-05-14T00:16:55Z","lastTransitionTime":"2024-05-14T00:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0514 00:18:10.700339    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.783435    1520 apiserver.go:52] "Watching apiserver"
	I0514 00:18:10.700339    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.788503    1520 topology_manager.go:215] "Topology Admit Handler" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4kmx4"
	I0514 00:18:10.700387    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.788795    1520 topology_manager.go:215] "Topology Admit Handler" podUID="5b3ee167-f21f-46b3-bace-03a7233717e0" podNamespace="kube-system" podName="kindnet-9q2tv"
	I0514 00:18:10.700387    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.788932    1520 topology_manager.go:215] "Topology Admit Handler" podUID="a9a488af-41ba-47f3-87b0-5a2f062afad6" podNamespace="kube-system" podName="kube-proxy-zhcz6"
	I0514 00:18:10.700427    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.789028    1520 topology_manager.go:215] "Topology Admit Handler" podUID="a92f04b8-a93f-42d8-81d7-d4da6bf2e247" podNamespace="kube-system" podName="storage-provisioner"
	I0514 00:18:10.700427    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.789184    1520 topology_manager.go:215] "Topology Admit Handler" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae" podNamespace="default" podName="busybox-fc5497c4f-xqj6w"
	I0514 00:18:10.700515    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.789553    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.700515    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.789850    1520 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-101100" podUID="1d9c79a4-1e4a-46fb-b3e8-02a4775f40af"
	I0514 00:18:10.700562    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.790329    1520 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-101100" podUID="cd31d030-75f8-4abb-bcad-34031cec7aa6"
	I0514 00:18:10.700602    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.794088    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.700602    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.798934    1520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-multinode-101100\" already exists" pod="kube-system/kube-scheduler-multinode-101100"
	I0514 00:18:10.700650    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.809466    1520 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0514 00:18:10.700650    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.835196    1520 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-101100"
	I0514 00:18:10.700689    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.857783    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5b3ee167-f21f-46b3-bace-03a7233717e0-cni-cfg\") pod \"kindnet-9q2tv\" (UID: \"5b3ee167-f21f-46b3-bace-03a7233717e0\") " pod="kube-system/kindnet-9q2tv"
	I0514 00:18:10.700736    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.857845    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b3ee167-f21f-46b3-bace-03a7233717e0-xtables-lock\") pod \"kindnet-9q2tv\" (UID: \"5b3ee167-f21f-46b3-bace-03a7233717e0\") " pod="kube-system/kindnet-9q2tv"
	I0514 00:18:10.700776    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.857866    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9a488af-41ba-47f3-87b0-5a2f062afad6-xtables-lock\") pod \"kube-proxy-zhcz6\" (UID: \"a9a488af-41ba-47f3-87b0-5a2f062afad6\") " pod="kube-system/kube-proxy-zhcz6"
	I0514 00:18:10.700824    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.857954    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b3ee167-f21f-46b3-bace-03a7233717e0-lib-modules\") pod \"kindnet-9q2tv\" (UID: \"5b3ee167-f21f-46b3-bace-03a7233717e0\") " pod="kube-system/kindnet-9q2tv"
	I0514 00:18:10.700824    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.858020    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a92f04b8-a93f-42d8-81d7-d4da6bf2e247-tmp\") pod \"storage-provisioner\" (UID: \"a92f04b8-a93f-42d8-81d7-d4da6bf2e247\") " pod="kube-system/storage-provisioner"
	I0514 00:18:10.700866    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.858051    1520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9a488af-41ba-47f3-87b0-5a2f062afad6-lib-modules\") pod \"kube-proxy-zhcz6\" (UID: \"a9a488af-41ba-47f3-87b0-5a2f062afad6\") " pod="kube-system/kube-proxy-zhcz6"
	I0514 00:18:10.700866    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.859176    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:10.700953    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.859325    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:16:56.359260421 +0000 UTC m=+6.710289036 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:10.701000    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.873841    1520 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-101100"
	I0514 00:18:10.701000    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.907826    1520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03d9b35578220c9e99f77722d9aa294f" path="/var/lib/kubelet/pods/03d9b35578220c9e99f77722d9aa294f/volumes"
	I0514 00:18:10.701040    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.910490    1520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1af4b764a5249ff25d3c1c709387c273" path="/var/lib/kubelet/pods/1af4b764a5249ff25d3c1c709387c273/volumes"
	I0514 00:18:10.701040    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.917375    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.701087    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.917415    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.701126    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: E0514 00:16:55.917466    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:16:56.417450852 +0000 UTC m=+6.768479567 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.701213    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.964380    1520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-101100" podStartSLOduration=0.9643304 podStartE2EDuration="964.3304ms" podCreationTimestamp="2024-05-14 00:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-14 00:16:55.964174289 +0000 UTC m=+6.315203004" watchObservedRunningTime="2024-05-14 00:16:55.9643304 +0000 UTC m=+6.315359015"
	I0514 00:18:10.701260    4316 command_runner.go:130] > May 14 00:16:55 multinode-101100 kubelet[1520]: I0514 00:16:55.985118    1520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-101100" podStartSLOduration=0.985100539 podStartE2EDuration="985.100539ms" podCreationTimestamp="2024-05-14 00:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-14 00:16:55.984806519 +0000 UTC m=+6.335835134" watchObservedRunningTime="2024-05-14 00:16:55.985100539 +0000 UTC m=+6.336129154"
	I0514 00:18:10.701260    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.362973    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:10.701301    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.363041    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:16:57.363025821 +0000 UTC m=+7.714054436 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:10.701348    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.463836    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.701398    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.463868    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.701443    4316 command_runner.go:130] > May 14 00:16:56 multinode-101100 kubelet[1520]: E0514 00:16:56.463923    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:16:57.46390701 +0000 UTC m=+7.814935725 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.701443    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.377986    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:10.701480    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.378101    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:16:59.378049439 +0000 UTC m=+9.729078054 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:10.701525    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.478290    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.701562    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.478356    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.701607    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.478448    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:16:59.478431994 +0000 UTC m=+9.829460709 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.701644    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.899119    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.701690    4316 command_runner.go:130] > May 14 00:16:57 multinode-101100 kubelet[1520]: E0514 00:16:57.899678    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.701690    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.394980    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:10.701728    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.395173    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:17:03.39515828 +0000 UTC m=+13.746186895 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:10.701772    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.496260    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.701772    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.496313    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.701809    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.496438    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:17:03.496350091 +0000 UTC m=+13.847378806 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.701891    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.891391    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:10.701891    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.901591    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.701937    4316 command_runner.go:130] > May 14 00:16:59 multinode-101100 kubelet[1520]: E0514 00:16:59.914896    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.701974    4316 command_runner.go:130] > May 14 00:17:01 multinode-101100 kubelet[1520]: E0514 00:17:01.898894    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.702019    4316 command_runner.go:130] > May 14 00:17:01 multinode-101100 kubelet[1520]: E0514 00:17:01.899345    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.702019    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.445887    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:10.702056    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.445965    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:17:11.44595071 +0000 UTC m=+21.796979425 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:10.702101    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.547258    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.702101    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.547292    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.702182    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.547346    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:17:11.547331033 +0000 UTC m=+21.898359648 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.702220    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.899515    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.702220    4316 command_runner.go:130] > May 14 00:17:03 multinode-101100 kubelet[1520]: E0514 00:17:03.900290    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.702265    4316 command_runner.go:130] > May 14 00:17:04 multinode-101100 kubelet[1520]: E0514 00:17:04.893282    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:10.702302    4316 command_runner.go:130] > May 14 00:17:05 multinode-101100 kubelet[1520]: E0514 00:17:05.900260    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.702347    4316 command_runner.go:130] > May 14 00:17:05 multinode-101100 kubelet[1520]: E0514 00:17:05.900651    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.702383    4316 command_runner.go:130] > May 14 00:17:07 multinode-101100 kubelet[1520]: E0514 00:17:07.899212    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.702429    4316 command_runner.go:130] > May 14 00:17:07 multinode-101100 kubelet[1520]: E0514 00:17:07.899658    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.702465    4316 command_runner.go:130] > May 14 00:17:09 multinode-101100 kubelet[1520]: E0514 00:17:09.895008    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:10.702465    4316 command_runner.go:130] > May 14 00:17:09 multinode-101100 kubelet[1520]: E0514 00:17:09.899381    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.702512    4316 command_runner.go:130] > May 14 00:17:09 multinode-101100 kubelet[1520]: E0514 00:17:09.899884    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.702549    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.508629    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:10.702593    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.508833    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:17:27.508813455 +0000 UTC m=+37.859842170 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:10.702629    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.609334    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.702629    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.609455    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.702710    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.609579    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:17:27.609562102 +0000 UTC m=+37.960590817 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.702779    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.899431    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.702779    4316 command_runner.go:130] > May 14 00:17:11 multinode-101100 kubelet[1520]: E0514 00:17:11.899749    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.702850    4316 command_runner.go:130] > May 14 00:17:13 multinode-101100 kubelet[1520]: E0514 00:17:13.898578    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.702850    4316 command_runner.go:130] > May 14 00:17:13 multinode-101100 kubelet[1520]: E0514 00:17:13.899676    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:14 multinode-101100 kubelet[1520]: E0514 00:17:14.897029    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:15 multinode-101100 kubelet[1520]: E0514 00:17:15.899665    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:15 multinode-101100 kubelet[1520]: E0514 00:17:15.900476    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: I0514 00:17:17.766386    1520 scope.go:117] "RemoveContainer" containerID="9c4eb727cedb65853cc3a94fdcc3e267ed41cd9cb15ef1cc1bb84f6f2278c9c4"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: I0514 00:17:17.767364    1520 scope.go:117] "RemoveContainer" containerID="b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: E0514 00:17:17.767901    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kindnet-cni pod=kindnet-9q2tv_kube-system(5b3ee167-f21f-46b3-bace-03a7233717e0)\"" pod="kube-system/kindnet-9q2tv" podUID="5b3ee167-f21f-46b3-bace-03a7233717e0"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: E0514 00:17:17.898891    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:17 multinode-101100 kubelet[1520]: E0514 00:17:17.899300    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:19 multinode-101100 kubelet[1520]: E0514 00:17:19.898102    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:19 multinode-101100 kubelet[1520]: E0514 00:17:19.899045    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:19 multinode-101100 kubelet[1520]: E0514 00:17:19.899315    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:21 multinode-101100 kubelet[1520]: E0514 00:17:21.900488    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:21 multinode-101100 kubelet[1520]: E0514 00:17:21.900677    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:23 multinode-101100 kubelet[1520]: E0514 00:17:23.899091    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:23 multinode-101100 kubelet[1520]: E0514 00:17:23.899625    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:24 multinode-101100 kubelet[1520]: E0514 00:17:24.899382    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:10.702919    4316 command_runner.go:130] > May 14 00:17:25 multinode-101100 kubelet[1520]: E0514 00:17:25.900463    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.703445    4316 command_runner.go:130] > May 14 00:17:25 multinode-101100 kubelet[1520]: E0514 00:17:25.900948    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.703483    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.550622    1520 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.550839    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume podName:06858a47-f51b-48d8-a2a6-f60b8107be13 nodeName:}" failed. No retries permitted until 2024-05-14 00:17:59.550821042 +0000 UTC m=+69.901849657 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06858a47-f51b-48d8-a2a6-f60b8107be13-config-volume") pod "coredns-7db6d8ff4d-4kmx4" (UID: "06858a47-f51b-48d8-a2a6-f60b8107be13") : object "kube-system"/"coredns" not registered
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.651942    1520 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.651988    1520 projected.go:200] Error preparing data for projected volume kube-api-access-jwkj4 for pod default/busybox-fc5497c4f-xqj6w: object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.652038    1520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4 podName:106df673-68ba-43dd-8a94-1e41aeb3cfae nodeName:}" failed. No retries permitted until 2024-05-14 00:17:59.652024653 +0000 UTC m=+70.003053368 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwkj4" (UniqueName: "kubernetes.io/projected/106df673-68ba-43dd-8a94-1e41aeb3cfae-kube-api-access-jwkj4") pod "busybox-fc5497c4f-xqj6w" (UID: "106df673-68ba-43dd-8a94-1e41aeb3cfae") : object "default"/"kube-root-ca.crt" not registered
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.900302    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.901190    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: I0514 00:17:27.901408    1520 scope.go:117] "RemoveContainer" containerID="b7d8d9a5e5eaf63475bf52ee7c07044c00fefffda7179abac17b9ed6a9e189e7"
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: I0514 00:17:27.913749    1520 scope.go:117] "RemoveContainer" containerID="e6ee22ee5c1b88cb0b1190c646094aefe229bfbd4486f007cde2b36da39ca886"
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: I0514 00:17:27.914050    1520 scope.go:117] "RemoveContainer" containerID="b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc"
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:27 multinode-101100 kubelet[1520]: E0514 00:17:27.914651    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a92f04b8-a93f-42d8-81d7-d4da6bf2e247)\"" pod="kube-system/storage-provisioner" podUID="a92f04b8-a93f-42d8-81d7-d4da6bf2e247"
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:29 multinode-101100 kubelet[1520]: E0514 00:17:29.898652    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:29 multinode-101100 kubelet[1520]: E0514 00:17:29.899154    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:29 multinode-101100 kubelet[1520]: E0514 00:17:29.900744    1520 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:31 multinode-101100 kubelet[1520]: E0514 00:17:31.900407    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.703562    4316 command_runner.go:130] > May 14 00:17:31 multinode-101100 kubelet[1520]: E0514 00:17:31.902295    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.704086    4316 command_runner.go:130] > May 14 00:17:33 multinode-101100 kubelet[1520]: E0514 00:17:33.898560    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-4kmx4" podUID="06858a47-f51b-48d8-a2a6-f60b8107be13"
	I0514 00:18:10.704124    4316 command_runner.go:130] > May 14 00:17:33 multinode-101100 kubelet[1520]: E0514 00:17:33.899627    1520 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-xqj6w" podUID="106df673-68ba-43dd-8a94-1e41aeb3cfae"
	I0514 00:18:10.704158    4316 command_runner.go:130] > May 14 00:17:39 multinode-101100 kubelet[1520]: I0514 00:17:39.899892    1520 scope.go:117] "RemoveContainer" containerID="b142687b621f17a456a4a451c0a362cd4b0ba94d79158b540e46ca40605a9afc"
	I0514 00:18:10.704190    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]: I0514 00:17:49.888753    1520 scope.go:117] "RemoveContainer" containerID="eda79d47d28ffbc726bec7eaad072eeebb31ec439ed9bbe9fd544b9913b8f3ea"
	I0514 00:18:10.704190    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]: E0514 00:17:49.924547    1520 iptables.go:577] "Could not set up iptables canary" err=<
	I0514 00:18:10.704190    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0514 00:18:10.704190    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0514 00:18:10.704190    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0514 00:18:10.704190    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0514 00:18:10.704190    4316 command_runner.go:130] > May 14 00:17:49 multinode-101100 kubelet[1520]: I0514 00:17:49.932695    1520 scope.go:117] "RemoveContainer" containerID="06f1a683cad8348fc4f8e339f226bbda12c4e8c1025c7acb52e2792253dd3008"
	I0514 00:18:10.704190    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 kubelet[1520]: I0514 00:18:00.478966    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cccb5e8cee3b173bd49a88aee4239ccc8bc11a3a166316e92f3a9abce9b252d"
	I0514 00:18:10.704190    4316 command_runner.go:130] > May 14 00:18:00 multinode-101100 kubelet[1520]: I0514 00:18:00.543407    1520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cb9b6d6d0915742a78c054211d49332a04beb4875f8a8f80cc4131b2a11aa2d"
	I0514 00:18:10.742680    4316 logs.go:123] Gathering logs for dmesg ...
	I0514 00:18:10.742680    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0514 00:18:10.762337    4316 command_runner.go:130] > [May14 00:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0514 00:18:10.762337    4316 command_runner.go:130] > [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0514 00:18:10.762337    4316 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0514 00:18:10.762337    4316 command_runner.go:130] > [  +0.104207] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0514 00:18:10.762337    4316 command_runner.go:130] > [  +0.023601] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0514 00:18:10.762337    4316 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0514 00:18:10.762337    4316 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0514 00:18:10.762337    4316 command_runner.go:130] > [  +0.058832] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0514 00:18:10.762337    4316 command_runner.go:130] > [  +0.024495] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0514 00:18:10.762896    4316 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0514 00:18:10.762896    4316 command_runner.go:130] > [  +5.692465] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0514 00:18:10.762896    4316 command_runner.go:130] > [  +0.707713] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0514 00:18:10.762933    4316 command_runner.go:130] > [  +1.789899] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0514 00:18:10.762933    4316 command_runner.go:130] > [  +7.282690] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0514 00:18:10.762933    4316 command_runner.go:130] > [  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0514 00:18:10.762933    4316 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0514 00:18:10.762933    4316 command_runner.go:130] > [May14 00:16] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	I0514 00:18:10.762933    4316 command_runner.go:130] > [  +0.158382] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	I0514 00:18:10.762989    4316 command_runner.go:130] > [ +23.750429] systemd-fstab-generator[974]: Ignoring "noauto" option for root device
	I0514 00:18:10.762989    4316 command_runner.go:130] > [  +0.111929] kauditd_printk_skb: 73 callbacks suppressed
	I0514 00:18:10.763019    4316 command_runner.go:130] > [  +0.464883] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	I0514 00:18:10.763019    4316 command_runner.go:130] > [  +0.164872] systemd-fstab-generator[1027]: Ignoring "noauto" option for root device
	I0514 00:18:10.763019    4316 command_runner.go:130] > [  +0.194348] systemd-fstab-generator[1041]: Ignoring "noauto" option for root device
	I0514 00:18:10.763019    4316 command_runner.go:130] > [  +2.832176] systemd-fstab-generator[1229]: Ignoring "noauto" option for root device
	I0514 00:18:10.763019    4316 command_runner.go:130] > [  +0.181315] systemd-fstab-generator[1241]: Ignoring "noauto" option for root device
	I0514 00:18:10.763019    4316 command_runner.go:130] > [  +0.160798] systemd-fstab-generator[1253]: Ignoring "noauto" option for root device
	I0514 00:18:10.763163    4316 command_runner.go:130] > [  +0.238904] systemd-fstab-generator[1268]: Ignoring "noauto" option for root device
	I0514 00:18:10.763200    4316 command_runner.go:130] > [  +0.787359] systemd-fstab-generator[1378]: Ignoring "noauto" option for root device
	I0514 00:18:10.763200    4316 command_runner.go:130] > [  +0.085936] kauditd_printk_skb: 205 callbacks suppressed
	I0514 00:18:10.763200    4316 command_runner.go:130] > [  +3.384697] systemd-fstab-generator[1513]: Ignoring "noauto" option for root device
	I0514 00:18:10.763200    4316 command_runner.go:130] > [  +1.802132] kauditd_printk_skb: 64 callbacks suppressed
	I0514 00:18:10.763200    4316 command_runner.go:130] > [  +5.213940] kauditd_printk_skb: 10 callbacks suppressed
	I0514 00:18:10.763200    4316 command_runner.go:130] > [  +3.471694] systemd-fstab-generator[2315]: Ignoring "noauto" option for root device
	I0514 00:18:10.763200    4316 command_runner.go:130] > [May14 00:17] kauditd_printk_skb: 70 callbacks suppressed
	I0514 00:18:10.765058    4316 logs.go:123] Gathering logs for kube-apiserver [da9e6534cd87] ...
	I0514 00:18:10.765058    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e6534cd87"
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:52.020111       1 options.go:221] external host was not specified, using 172.23.102.122
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:52.031119       1 server.go:148] Version: v1.30.0
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:52.031201       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:52.560170       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:52.562027       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:52.567323       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:52.562214       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:52.570134       1 instance.go:299] Using reconciler: lease
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:53.544464       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0514 00:18:10.790208    4316 command_runner.go:130] ! W0514 00:16:53.544866       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:53.780904       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:53.781233       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:54.015006       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:54.172205       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:54.186014       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0514 00:18:10.790208    4316 command_runner.go:130] ! W0514 00:16:54.186188       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790208    4316 command_runner.go:130] ! W0514 00:16:54.186609       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:54.187573       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0514 00:18:10.790208    4316 command_runner.go:130] ! W0514 00:16:54.187695       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:54.188811       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:54.190200       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0514 00:18:10.790208    4316 command_runner.go:130] ! W0514 00:16:54.190309       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0514 00:18:10.790208    4316 command_runner.go:130] ! W0514 00:16:54.190366       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:54.192283       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0514 00:18:10.790208    4316 command_runner.go:130] ! W0514 00:16:54.192583       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0514 00:18:10.790208    4316 command_runner.go:130] ! I0514 00:16:54.193726       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0514 00:18:10.790208    4316 command_runner.go:130] ! W0514 00:16:54.193833       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.193842       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.194656       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.194769       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.194831       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.195773       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.200522       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.200808       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.201073       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.202173       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.202352       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.202465       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.204036       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.204232       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.213708       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.213869       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.213992       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.214976       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.215217       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.215317       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.226860       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.227134       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.227258       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.230259       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.232567       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.232734       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.232824       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.239186       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.239294       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.239304       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.241605       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.241703       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.241712       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.242373       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.242466       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.259244       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0514 00:18:10.790785    4316 command_runner.go:130] ! W0514 00:16:54.259536       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0514 00:18:10.790785    4316 command_runner.go:130] ! I0514 00:16:54.792225       1 secure_serving.go:213] Serving securely on [::]:8443
	I0514 00:18:10.791303    4316 command_runner.go:130] ! I0514 00:16:54.792432       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.794552       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.794677       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.794720       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.795157       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.795787       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.795995       1 controller.go:116] Starting legacy_token_tracking_controller
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.796042       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.796156       1 controller.go:78] Starting OpenAPI AggregationController
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.796272       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.797969       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.798688       1 available_controller.go:423] Starting AvailableConditionController
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.798701       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.799424       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.799667       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.799692       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.800971       1 aggregator.go:163] waiting for initial CRD sync...
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.792447       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.792459       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.792473       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.812587       1 controller.go:139] Starting OpenAPI controller
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.812611       1 controller.go:87] Starting OpenAPI V3 controller
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.812626       1 naming_controller.go:291] Starting NamingConditionController
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.812640       1 establishing_controller.go:76] Starting EstablishingController
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.812660       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.812674       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.812685       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.848957       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.849152       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.850275       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.850299       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.906495       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.938841       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.950730       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.950897       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.951294       1 aggregator.go:165] initial CRD sync complete...
	I0514 00:18:10.791452    4316 command_runner.go:130] ! I0514 00:16:54.951545       1 autoregister_controller.go:141] Starting autoregister controller
	I0514 00:18:10.791967    4316 command_runner.go:130] ! I0514 00:16:54.951793       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0514 00:18:10.791967    4316 command_runner.go:130] ! I0514 00:16:54.951875       1 cache.go:39] Caches are synced for autoregister controller
	I0514 00:18:10.792057    4316 command_runner.go:130] ! I0514 00:16:54.962299       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0514 00:18:10.792232    4316 command_runner.go:130] ! I0514 00:16:54.968027       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0514 00:18:10.792323    4316 command_runner.go:130] ! I0514 00:16:54.968302       1 policy_source.go:224] refreshing policies
	I0514 00:18:10.792414    4316 command_runner.go:130] ! I0514 00:16:54.997391       1 shared_informer.go:320] Caches are synced for configmaps
	I0514 00:18:10.792504    4316 command_runner.go:130] ! I0514 00:16:54.999391       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0514 00:18:10.792594    4316 command_runner.go:130] ! I0514 00:16:54.999732       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0514 00:18:10.792682    4316 command_runner.go:130] ! I0514 00:16:54.999871       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0514 00:18:10.792772    4316 command_runner.go:130] ! I0514 00:16:55.037244       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0514 00:18:10.792861    4316 command_runner.go:130] ! I0514 00:16:55.824524       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0514 00:18:10.792951    4316 command_runner.go:130] ! W0514 00:16:56.521956       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.102.122 172.23.106.39]
	I0514 00:18:10.793042    4316 command_runner.go:130] ! I0514 00:16:56.523614       1 controller.go:615] quota admission added evaluator for: endpoints
	I0514 00:18:10.793132    4316 command_runner.go:130] ! I0514 00:16:56.536716       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0514 00:18:10.793223    4316 command_runner.go:130] ! I0514 00:16:57.861026       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0514 00:18:10.793314    4316 command_runner.go:130] ! I0514 00:16:58.068043       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0514 00:18:10.793404    4316 command_runner.go:130] ! I0514 00:16:58.085925       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0514 00:18:10.793494    4316 command_runner.go:130] ! I0514 00:16:58.189328       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0514 00:18:10.793581    4316 command_runner.go:130] ! I0514 00:16:58.200849       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0514 00:18:10.793711    4316 command_runner.go:130] ! W0514 00:17:16.528300       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.102.122]
	I0514 00:18:10.800185    4316 logs.go:123] Gathering logs for kube-scheduler [d3581c1c570c] ...
	I0514 00:18:10.800185    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3581c1c570c"
	I0514 00:18:10.823372    4316 command_runner.go:130] ! I0514 00:16:52.716401       1 serving.go:380] Generated self-signed cert in-memory
	I0514 00:18:10.823372    4316 command_runner.go:130] ! W0514 00:16:54.858727       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0514 00:18:10.823372    4316 command_runner.go:130] ! W0514 00:16:54.858778       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0514 00:18:10.823372    4316 command_runner.go:130] ! W0514 00:16:54.858790       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0514 00:18:10.823372    4316 command_runner.go:130] ! W0514 00:16:54.858800       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0514 00:18:10.823372    4316 command_runner.go:130] ! I0514 00:16:54.945438       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0514 00:18:10.823372    4316 command_runner.go:130] ! I0514 00:16:54.945867       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:10.823372    4316 command_runner.go:130] ! I0514 00:16:54.953986       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0514 00:18:10.823372    4316 command_runner.go:130] ! I0514 00:16:54.957180       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0514 00:18:10.823372    4316 command_runner.go:130] ! I0514 00:16:54.957284       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:18:10.823372    4316 command_runner.go:130] ! I0514 00:16:54.957493       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:10.823372    4316 command_runner.go:130] ! I0514 00:16:55.058381       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:18:10.825653    4316 logs.go:123] Gathering logs for etcd [08450c853590] ...
	I0514 00:18:10.825691    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08450c853590"
	I0514 00:18:10.856035    4316 command_runner.go:130] ! {"level":"warn","ts":"2024-05-14T00:16:51.687231Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0514 00:18:10.856484    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.691397Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.23.102.122:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.23.102.122:2380","--initial-cluster=multinode-101100=https://172.23.102.122:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.23.102.122:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.23.102.122:2380","--name=multinode-101100","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0514 00:18:10.856484    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.692425Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0514 00:18:10.856484    4316 command_runner.go:130] ! {"level":"warn","ts":"2024-05-14T00:16:51.693634Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0514 00:18:10.856484    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.693771Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.23.102.122:2380"]}
	I0514 00:18:10.856484    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.694117Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0514 00:18:10.856484    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.703219Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.23.102.122:2379"]}
	I0514 00:18:10.857021    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.704312Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-101100","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.23.102.122:2380"],"listen-peer-urls":["https://172.23.102.122:2380"],"advertise-client-urls":["https://172.23.102.122:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.102.122:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0514 00:18:10.857021    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.7264Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"19.905879ms"}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.748539Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.766395Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"bb849d1df0b559d7","local-member-id":"6e4c15c3d0f3380f","commit-index":1898}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.767439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f switched to configuration voters=()"}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.767611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became follower at term 2"}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.768086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 6e4c15c3d0f3380f [peers: [], term: 2, commit: 1898, applied: 0, lastindex: 1898, lastterm: 2]"}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"warn","ts":"2024-05-14T00:16:51.782157Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.786938Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1096}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.797876Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1653}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.80426Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.81216Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"6e4c15c3d0f3380f","timeout":"7s"}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.813213Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"6e4c15c3d0f3380f"}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.814234Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"6e4c15c3d0f3380f","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.815302Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.816695Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0514 00:18:10.857091    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.816877Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0514 00:18:10.857636    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.816978Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0514 00:18:10.857636    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.817493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f switched to configuration voters=(7947751373170489359)"}
	I0514 00:18:10.857726    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.817687Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bb849d1df0b559d7","local-member-id":"6e4c15c3d0f3380f","added-peer-id":"6e4c15c3d0f3380f","added-peer-peer-urls":["https://172.23.106.39:2380"]}
	I0514 00:18:10.857770    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.817911Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bb849d1df0b559d7","local-member-id":"6e4c15c3d0f3380f","cluster-version":"3.5"}
	I0514 00:18:10.857770    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.818648Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0514 00:18:10.857770    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.83299Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0514 00:18:10.857944    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.834951Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6e4c15c3d0f3380f","initial-advertise-peer-urls":["https://172.23.102.122:2380"],"listen-peer-urls":["https://172.23.102.122:2380"],"advertise-client-urls":["https://172.23.102.122:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.102.122:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0514 00:18:10.857944    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.835138Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0514 00:18:10.857944    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.835469Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.23.102.122:2380"}
	I0514 00:18:10.858045    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:51.835603Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.23.102.122:2380"}
	I0514 00:18:10.858045    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.468953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f is starting a new election at term 2"}
	I0514 00:18:10.858045    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became pre-candidate at term 2"}
	I0514 00:18:10.858045    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f received MsgPreVoteResp from 6e4c15c3d0f3380f at term 2"}
	I0514 00:18:10.858167    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became candidate at term 3"}
	I0514 00:18:10.858167    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f received MsgVoteResp from 6e4c15c3d0f3380f at term 3"}
	I0514 00:18:10.858167    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became leader at term 3"}
	I0514 00:18:10.858277    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.469259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6e4c15c3d0f3380f elected leader 6e4c15c3d0f3380f at term 3"}
	I0514 00:18:10.858373    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.479025Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"6e4c15c3d0f3380f","local-member-attributes":"{Name:multinode-101100 ClientURLs:[https://172.23.102.122:2379]}","request-path":"/0/members/6e4c15c3d0f3380f/attributes","cluster-id":"bb849d1df0b559d7","publish-timeout":"7s"}
	I0514 00:18:10.858426    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.479459Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0514 00:18:10.858458    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.479642Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0514 00:18:10.858458    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.481317Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0514 00:18:10.858458    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.481353Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0514 00:18:10.858458    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.483334Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.23.102.122:2379"}
	I0514 00:18:10.858565    4316 command_runner.go:130] ! {"level":"info","ts":"2024-05-14T00:16:53.483616Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0514 00:18:10.864013    4316 logs.go:123] Gathering logs for coredns [dcc5a109288b] ...
	I0514 00:18:10.864544    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc5a109288b"
	I0514 00:18:10.892128    4316 command_runner.go:130] > .:53
	I0514 00:18:10.892190    4316 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = aa3c53a4fee7c79042020c4ad5abc53f615c90ace85c56ddcef4febd643c83c914a53a500e1bfe4eab6dd4f6a22b9d2014a8ba875b505ed10d3063ed95ac2ed3
	I0514 00:18:10.892190    4316 command_runner.go:130] > CoreDNS-1.11.1
	I0514 00:18:10.892190    4316 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0514 00:18:10.892190    4316 command_runner.go:130] > [INFO] 127.0.0.1:53257 - 27032 "HINFO IN 6976640239659908905.245956973392320689. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.05278328s
	I0514 00:18:10.892190    4316 logs.go:123] Gathering logs for kube-controller-manager [b87239d1199a] ...
	I0514 00:18:10.892190    4316 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87239d1199a"
	I0514 00:18:10.918917    4316 command_runner.go:130] ! I0514 00:16:52.414723       1 serving.go:380] Generated self-signed cert in-memory
	I0514 00:18:10.918917    4316 command_runner.go:130] ! I0514 00:16:52.798318       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0514 00:18:10.918917    4316 command_runner.go:130] ! I0514 00:16:52.798456       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:18:10.919561    4316 command_runner.go:130] ! I0514 00:16:52.802364       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0514 00:18:10.919641    4316 command_runner.go:130] ! I0514 00:16:52.802939       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0514 00:18:10.919641    4316 command_runner.go:130] ! I0514 00:16:52.803159       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0514 00:18:10.919641    4316 command_runner.go:130] ! I0514 00:16:52.803510       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:18:10.919641    4316 command_runner.go:130] ! I0514 00:16:56.867503       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0514 00:18:10.919641    4316 command_runner.go:130] ! I0514 00:16:56.868219       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0514 00:18:10.919641    4316 command_runner.go:130] ! I0514 00:16:56.874269       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0514 00:18:10.919641    4316 command_runner.go:130] ! I0514 00:16:56.878308       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0514 00:18:10.919641    4316 command_runner.go:130] ! I0514 00:16:56.878330       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0514 00:18:10.919641    4316 command_runner.go:130] ! I0514 00:16:56.878409       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0514 00:18:10.919641    4316 command_runner.go:130] ! I0514 00:16:56.878509       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0514 00:18:10.919641    4316 command_runner.go:130] ! I0514 00:16:56.878517       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0514 00:18:10.919641    4316 command_runner.go:130] ! I0514 00:16:56.882632       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0514 00:18:10.920296    4316 command_runner.go:130] ! I0514 00:16:56.882648       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0514 00:18:10.920607    4316 command_runner.go:130] ! I0514 00:16:56.882656       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0514 00:18:10.920871    4316 command_runner.go:130] ! I0514 00:16:56.883478       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0514 00:18:10.920871    4316 command_runner.go:130] ! I0514 00:16:56.883488       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0514 00:18:10.920871    4316 command_runner.go:130] ! I0514 00:16:56.883496       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0514 00:18:10.920871    4316 command_runner.go:130] ! I0514 00:16:56.885766       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0514 00:18:10.920871    4316 command_runner.go:130] ! I0514 00:16:56.888273       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0514 00:18:10.920871    4316 command_runner.go:130] ! I0514 00:16:56.888463       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0514 00:18:10.921563    4316 command_runner.go:130] ! I0514 00:16:56.889304       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.890244       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.890408       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.893619       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.903162       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.903183       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.969340       1 shared_informer.go:320] Caches are synced for tokens
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.982656       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.982729       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.983268       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.983299       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.983354       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.983426       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.983451       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! W0514 00:16:56.983466       1 shared_informer.go:597] resyncPeriod 15h46m20.096782659s is smaller than resyncCheckPeriod 18h37m10.298700604s and the informer has already started. Changing it to 18h37m10.298700604s
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.983922       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.984377       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.984435       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.984460       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0514 00:18:10.921735    4316 command_runner.go:130] ! I0514 00:16:56.984478       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0514 00:18:10.922668    4316 command_runner.go:130] ! I0514 00:16:56.984528       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:56.984568       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:56.984736       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:56.985288       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:56.995607       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:56.996188       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:56.997004       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:56.997141       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:56.997174       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:56.997363       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:56.997373       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:57.003479       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:57.004086       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:57.004336       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:16:57.004348       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:17:07.031733       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:17:07.032143       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:17:07.032242       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:17:07.032648       1 shared_informer.go:313] Waiting for caches to sync for node
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:17:07.034995       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:17:07.035109       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:17:07.035510       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0514 00:18:10.922790    4316 command_runner.go:130] ! I0514 00:17:07.035544       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0514 00:18:10.923376    4316 command_runner.go:130] ! I0514 00:17:07.035551       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0514 00:18:10.923429    4316 command_runner.go:130] ! I0514 00:17:07.038183       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.038394       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.039212       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.040784       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.041050       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.041194       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.043909       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.044044       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.044106       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.059101       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.059352       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.059503       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.062189       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.062615       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.062641       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.070971       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.071021       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.071151       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0514 00:18:10.923518    4316 command_runner.go:130] ! I0514 00:17:07.071293       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0514 00:18:10.924106    4316 command_runner.go:130] ! I0514 00:17:07.071328       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.071388       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.083342       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.084321       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.084474       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.085952       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.086347       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.086569       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.088414       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.088731       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.089444       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.091486       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.091650       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.091678       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.094570       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.095467       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.095818       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.097778       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.098911       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.098939       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.100648       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.101514       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.101659       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.103436       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0514 00:18:10.924143    4316 command_runner.go:130] ! I0514 00:17:07.103908       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.109194       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.109267       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.109496       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.113760       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.114024       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.114252       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.115259       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.116925       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.117254       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.117353       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.121368       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.121764       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.121788       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.122128       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.122156       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.122248       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.122301       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.122371       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.122432       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.122464       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:10.924964    4316 command_runner.go:130] ! I0514 00:17:07.122706       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:10.925751    4316 command_runner.go:130] ! I0514 00:17:07.123282       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.123678       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.126535       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.126692       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0514 00:18:10.925783    4316 command_runner.go:130] ! E0514 00:17:07.165594       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.165634       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.218097       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.218271       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.218379       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.218721       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.265917       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.266033       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.266045       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.315398       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.315511       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.315534       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.415899       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.416022       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.465981       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.466026       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.466177       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.466545       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.516337       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0514 00:18:10.925783    4316 command_runner.go:130] ! I0514 00:17:07.516498       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.516515       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.567477       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.567616       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.567627       1 shared_informer.go:313] Waiting for caches to sync for job
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.617346       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.617464       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.617476       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0514 00:18:10.926610    4316 command_runner.go:130] ! E0514 00:17:07.665765       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.665865       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.665876       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.671623       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.693623       1 shared_informer.go:320] Caches are synced for crt configmap
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.703208       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.707002       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100\" does not exist"
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.707898       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m02\" does not exist"
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.708010       1 shared_informer.go:320] Caches are synced for daemon sets
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.708168       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m03\" does not exist"
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.710800       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.710879       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.716140       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.716709       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.717695       1 shared_informer.go:320] Caches are synced for cronjob
	I0514 00:18:10.926610    4316 command_runner.go:130] ! I0514 00:17:07.717710       1 shared_informer.go:320] Caches are synced for stateful set
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.718924       1 shared_informer.go:320] Caches are synced for attach detach
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.723267       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.723378       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.723467       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.723495       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.726980       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.733271       1 shared_informer.go:320] Caches are synced for node
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.733445       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.733467       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.733473       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.733480       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.739996       1 shared_informer.go:320] Caches are synced for expand
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.742032       1 shared_informer.go:320] Caches are synced for PV protection
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.744959       1 shared_informer.go:320] Caches are synced for ephemeral
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.760453       1 shared_informer.go:320] Caches are synced for namespace
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.762790       1 shared_informer.go:320] Caches are synced for service account
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.766175       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.767750       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.768151       1 shared_informer.go:320] Caches are synced for job
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.779225       1 shared_informer.go:320] Caches are synced for TTL
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.779406       1 shared_informer.go:320] Caches are synced for GC
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.784902       1 shared_informer.go:320] Caches are synced for HPA
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.787441       1 shared_informer.go:320] Caches are synced for persistent volume
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.790178       1 shared_informer.go:320] Caches are synced for PVC protection
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.791571       1 shared_informer.go:320] Caches are synced for endpoint
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.797318       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.816750       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.836762       1 shared_informer.go:320] Caches are synced for taint
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.837127       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.869081       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m03"
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.869544       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m02"
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.869413       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100"
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.870789       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.898670       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.901033       1 shared_informer.go:320] Caches are synced for deployment
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.904366       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.916125       1 shared_informer.go:320] Caches are synced for disruption
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.977330       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:07.988956       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0514 00:18:10.927398    4316 command_runner.go:130] ! I0514 00:17:08.134754       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="230.307102ms"
	I0514 00:18:10.928114    4316 command_runner.go:130] ! I0514 00:17:08.134896       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.6µs"
	I0514 00:18:10.928114    4316 command_runner.go:130] ! I0514 00:17:08.140785       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="234.508146ms"
	I0514 00:18:10.928114    4316 command_runner.go:130] ! I0514 00:17:08.140977       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.3µs"
	I0514 00:18:10.928114    4316 command_runner.go:130] ! I0514 00:17:08.412419       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 00:18:10.928114    4316 command_runner.go:130] ! I0514 00:17:08.472034       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 00:18:10.928114    4316 command_runner.go:130] ! I0514 00:17:08.472384       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0514 00:18:10.928114    4316 command_runner.go:130] ! I0514 00:17:37.878702       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0514 00:18:10.928114    4316 command_runner.go:130] ! I0514 00:18:01.608725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.75856ms"
	I0514 00:18:10.928114    4316 command_runner.go:130] ! I0514 00:18:01.608844       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.702µs"
	I0514 00:18:10.928114    4316 command_runner.go:130] ! I0514 00:18:01.651304       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.008µs"
	I0514 00:18:10.928114    4316 command_runner.go:130] ! I0514 00:18:01.710123       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.783088ms"
	I0514 00:18:10.928114    4316 command_runner.go:130] ! I0514 00:18:01.711762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.302µs"
	I0514 00:18:10.943483    4316 logs.go:123] Gathering logs for container status ...
	I0514 00:18:10.943483    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0514 00:18:11.012111    4316 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0514 00:18:11.012224    4316 command_runner.go:130] > 3d0b2f0362eb4       8c811b4aec35f                                                                                         11 seconds ago       Running             busybox                   1                   8cb9b6d6d0915       busybox-fc5497c4f-xqj6w
	I0514 00:18:11.012224    4316 command_runner.go:130] > dcc5a109288b6       cbb01a7bd410d                                                                                         11 seconds ago       Running             coredns                   1                   1cccb5e8cee3b       coredns-7db6d8ff4d-4kmx4
	I0514 00:18:11.012224    4316 command_runner.go:130] > bde84ba2d4ed7       6e38f40d628db                                                                                         32 seconds ago       Running             storage-provisioner       2                   468a0e2976ae4       storage-provisioner
	I0514 00:18:11.012334    4316 command_runner.go:130] > 2b424a7cd98c8       4950bb10b3f87                                                                                         44 seconds ago       Running             kindnet-cni               2                   5233e076edceb       kindnet-9q2tv
	I0514 00:18:11.012391    4316 command_runner.go:130] > b7d8d9a5e5eaf       4950bb10b3f87                                                                                         About a minute ago   Exited              kindnet-cni               1                   5233e076edceb       kindnet-9q2tv
	I0514 00:18:11.012482    4316 command_runner.go:130] > b142687b621f1       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   468a0e2976ae4       storage-provisioner
	I0514 00:18:11.012482    4316 command_runner.go:130] > b2a1b31cd7dee       a0bf559e280cf                                                                                         About a minute ago   Running             kube-proxy                1                   a8ac60a565998       kube-proxy-zhcz6
	I0514 00:18:11.012584    4316 command_runner.go:130] > 08450c853590d       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   419648c0d4053       etcd-multinode-101100
	I0514 00:18:11.012647    4316 command_runner.go:130] > da9e6534cd87d       c42f13656d0b2                                                                                         About a minute ago   Running             kube-apiserver            0                   509b8407e0955       kube-apiserver-multinode-101100
	I0514 00:18:11.012709    4316 command_runner.go:130] > d3581c1c570cf       259c8277fcbbc                                                                                         About a minute ago   Running             kube-scheduler            1                   ddcaadef980ac       kube-scheduler-multinode-101100
	I0514 00:18:11.012771    4316 command_runner.go:130] > b87239d1199ab       c7aad43836fa5                                                                                         About a minute ago   Running             kube-controller-manager   1                   659643d47b9ae       kube-controller-manager-multinode-101100
	I0514 00:18:11.012840    4316 command_runner.go:130] > 57dea5416eb67       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago       Exited              busybox                   0                   76d1b8ce19aba       busybox-fc5497c4f-xqj6w
	I0514 00:18:11.012902    4316 command_runner.go:130] > 76c5ab7859eff       cbb01a7bd410d                                                                                         21 minutes ago       Exited              coredns                   0                   8bb49b28c842a       coredns-7db6d8ff4d-4kmx4
	I0514 00:18:11.012970    4316 command_runner.go:130] > 91edaaa00da23       a0bf559e280cf                                                                                         21 minutes ago       Exited              kube-proxy                0                   9bd694480978f       kube-proxy-zhcz6
	I0514 00:18:11.013032    4316 command_runner.go:130] > e96f94398d6dd       c7aad43836fa5                                                                                         22 minutes ago       Exited              kube-controller-manager   0                   da9268fd6556b       kube-controller-manager-multinode-101100
	I0514 00:18:11.013093    4316 command_runner.go:130] > 964887fc5d362       259c8277fcbbc                                                                                         22 minutes ago       Exited              kube-scheduler            0                   fcb3b27edcd2a       kube-scheduler-multinode-101100
	I0514 00:18:13.531771    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods
	I0514 00:18:13.531771    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:13.531841    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:13.531841    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:13.537239    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:18:13.537239    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:13.537239    4316 round_trippers.go:580]     Audit-Id: 8989f81f-81b8-463b-8a74-473c5dfd49a5
	I0514 00:18:13.537239    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:13.537239    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:13.537239    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:13.537239    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:13.537239    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:13 GMT
	I0514 00:18:13.539774    4316 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1863"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1851","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86610 chars]
	I0514 00:18:13.545454    4316 system_pods.go:59] 12 kube-system pods found
	I0514 00:18:13.545454    4316 system_pods.go:61] "coredns-7db6d8ff4d-4kmx4" [06858a47-f51b-48d8-a2a6-f60b8107be13] Running
	I0514 00:18:13.545454    4316 system_pods.go:61] "etcd-multinode-101100" [74cd34fe-a56b-453d-afb3-a9db3db0d5ba] Running
	I0514 00:18:13.545454    4316 system_pods.go:61] "kindnet-2lwsm" [26b8beff-9849-4cbf-9a2b-8ef6354fa5ca] Running
	I0514 00:18:13.545454    4316 system_pods.go:61] "kindnet-9q2tv" [5b3ee167-f21f-46b3-bace-03a7233717e0] Running
	I0514 00:18:13.545454    4316 system_pods.go:61] "kindnet-tfbt8" [95a6d195-9e10-4569-902b-b56e495c9b86] Running
	I0514 00:18:13.545454    4316 system_pods.go:61] "kube-apiserver-multinode-101100" [60889645-4c2d-4cfc-b322-c0f1b6e34503] Running
	I0514 00:18:13.545454    4316 system_pods.go:61] "kube-controller-manager-multinode-101100" [1a74381a-7477-4fd3-b344-c4a230014f97] Running
	I0514 00:18:13.545454    4316 system_pods.go:61] "kube-proxy-8zsgn" [af208cbd-fa8a-4822-9b19-dc30f63fa59c] Running
	I0514 00:18:13.545454    4316 system_pods.go:61] "kube-proxy-b25hq" [d39f5818-3e88-4162-a7ce-734ca28103bf] Running
	I0514 00:18:13.545454    4316 system_pods.go:61] "kube-proxy-zhcz6" [a9a488af-41ba-47f3-87b0-5a2f062afad6] Running
	I0514 00:18:13.545454    4316 system_pods.go:61] "kube-scheduler-multinode-101100" [d7300c2d-377f-4061-bd34-5f7593b7e827] Running
	I0514 00:18:13.545454    4316 system_pods.go:61] "storage-provisioner" [a92f04b8-a93f-42d8-81d7-d4da6bf2e247] Running
	I0514 00:18:13.545454    4316 system_pods.go:74] duration metric: took 3.6060276s to wait for pod list to return data ...
	I0514 00:18:13.545454    4316 default_sa.go:34] waiting for default service account to be created ...
	I0514 00:18:13.545454    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/default/serviceaccounts
	I0514 00:18:13.545454    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:13.545454    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:13.545454    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:13.552270    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:18:13.552270    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:13.552270    4316 round_trippers.go:580]     Content-Length: 262
	I0514 00:18:13.552270    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:13 GMT
	I0514 00:18:13.552270    4316 round_trippers.go:580]     Audit-Id: eef845ef-8759-43a4-838e-441516c8f729
	I0514 00:18:13.552270    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:13.552270    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:13.552270    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:13.552270    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:13.552270    4316 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1864"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"f8245e64-9479-49b1-8b02-d2e6351373e3","resourceVersion":"345","creationTimestamp":"2024-05-13T23:56:23Z"}}]}
	I0514 00:18:13.552270    4316 default_sa.go:45] found service account: "default"
	I0514 00:18:13.553293    4316 default_sa.go:55] duration metric: took 7.8381ms for default service account to be created ...
	I0514 00:18:13.553293    4316 system_pods.go:116] waiting for k8s-apps to be running ...
	I0514 00:18:13.553293    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods
	I0514 00:18:13.553293    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:13.553293    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:13.553293    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:13.557410    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:18:13.557410    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:13.557410    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:13.557410    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:13.557410    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:13.557410    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:13 GMT
	I0514 00:18:13.557410    4316 round_trippers.go:580]     Audit-Id: 36974bc1-4a34-4f83-9e69-655bb9bb1689
	I0514 00:18:13.557410    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:13.559046    4316 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1864"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1851","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86610 chars]
	I0514 00:18:13.562523    4316 system_pods.go:86] 12 kube-system pods found
	I0514 00:18:13.562523    4316 system_pods.go:89] "coredns-7db6d8ff4d-4kmx4" [06858a47-f51b-48d8-a2a6-f60b8107be13] Running
	I0514 00:18:13.562523    4316 system_pods.go:89] "etcd-multinode-101100" [74cd34fe-a56b-453d-afb3-a9db3db0d5ba] Running
	I0514 00:18:13.562523    4316 system_pods.go:89] "kindnet-2lwsm" [26b8beff-9849-4cbf-9a2b-8ef6354fa5ca] Running
	I0514 00:18:13.562523    4316 system_pods.go:89] "kindnet-9q2tv" [5b3ee167-f21f-46b3-bace-03a7233717e0] Running
	I0514 00:18:13.562606    4316 system_pods.go:89] "kindnet-tfbt8" [95a6d195-9e10-4569-902b-b56e495c9b86] Running
	I0514 00:18:13.562606    4316 system_pods.go:89] "kube-apiserver-multinode-101100" [60889645-4c2d-4cfc-b322-c0f1b6e34503] Running
	I0514 00:18:13.562606    4316 system_pods.go:89] "kube-controller-manager-multinode-101100" [1a74381a-7477-4fd3-b344-c4a230014f97] Running
	I0514 00:18:13.562606    4316 system_pods.go:89] "kube-proxy-8zsgn" [af208cbd-fa8a-4822-9b19-dc30f63fa59c] Running
	I0514 00:18:13.562606    4316 system_pods.go:89] "kube-proxy-b25hq" [d39f5818-3e88-4162-a7ce-734ca28103bf] Running
	I0514 00:18:13.562606    4316 system_pods.go:89] "kube-proxy-zhcz6" [a9a488af-41ba-47f3-87b0-5a2f062afad6] Running
	I0514 00:18:13.562606    4316 system_pods.go:89] "kube-scheduler-multinode-101100" [d7300c2d-377f-4061-bd34-5f7593b7e827] Running
	I0514 00:18:13.562606    4316 system_pods.go:89] "storage-provisioner" [a92f04b8-a93f-42d8-81d7-d4da6bf2e247] Running
	I0514 00:18:13.562606    4316 system_pods.go:126] duration metric: took 9.3132ms to wait for k8s-apps to be running ...
	I0514 00:18:13.562606    4316 system_svc.go:44] waiting for kubelet service to be running ....
	I0514 00:18:13.569709    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0514 00:18:13.593636    4316 system_svc.go:56] duration metric: took 31.0274ms WaitForService to wait for kubelet
	I0514 00:18:13.593636    4316 kubeadm.go:576] duration metric: took 1m13.9197873s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0514 00:18:13.593636    4316 node_conditions.go:102] verifying NodePressure condition ...
	I0514 00:18:13.593818    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes
	I0514 00:18:13.593818    4316 round_trippers.go:469] Request Headers:
	I0514 00:18:13.593818    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:18:13.593818    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:18:13.596012    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:18:13.597047    4316 round_trippers.go:577] Response Headers:
	I0514 00:18:13.597085    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:18:13.597085    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:18:13.597085    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:18:13.597085    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:18:13.597085    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:18:13 GMT
	I0514 00:18:13.597085    4316 round_trippers.go:580]     Audit-Id: 393d0d3e-05bc-4242-9acb-37031f44ad8c
	I0514 00:18:13.597594    4316 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1864"},"items":[{"metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16259 chars]
	I0514 00:18:13.598655    4316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 00:18:13.598755    4316 node_conditions.go:123] node cpu capacity is 2
	I0514 00:18:13.598755    4316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 00:18:13.598755    4316 node_conditions.go:123] node cpu capacity is 2
	I0514 00:18:13.598755    4316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 00:18:13.598755    4316 node_conditions.go:123] node cpu capacity is 2
	I0514 00:18:13.598755    4316 node_conditions.go:105] duration metric: took 5.1189ms to run NodePressure ...
	I0514 00:18:13.598755    4316 start.go:240] waiting for startup goroutines ...
	I0514 00:18:13.598755    4316 start.go:245] waiting for cluster config update ...
	I0514 00:18:13.598906    4316 start.go:254] writing updated cluster config ...
	I0514 00:18:13.602892    4316 out.go:177] 
	I0514 00:18:13.606106    4316 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:18:13.617662    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:18:13.618329    4316 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\config.json ...
	I0514 00:18:13.622517    4316 out.go:177] * Starting "multinode-101100-m02" worker node in "multinode-101100" cluster
	I0514 00:18:13.626047    4316 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0514 00:18:13.626047    4316 cache.go:56] Caching tarball of preloaded images
	I0514 00:18:13.627409    4316 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0514 00:18:13.627563    4316 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0514 00:18:13.627740    4316 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\config.json ...
	I0514 00:18:13.629940    4316 start.go:360] acquireMachinesLock for multinode-101100-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0514 00:18:13.630021    4316 start.go:364] duration metric: took 80.7µs to acquireMachinesLock for "multinode-101100-m02"
	I0514 00:18:13.630207    4316 start.go:96] Skipping create...Using existing machine configuration
	I0514 00:18:13.630207    4316 fix.go:54] fixHost starting: m02
	I0514 00:18:13.630594    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:18:15.595326    4316 main.go:141] libmachine: [stdout =====>] : Off
	
	I0514 00:18:15.595326    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:15.595326    4316 fix.go:112] recreateIfNeeded on multinode-101100-m02: state=Stopped err=<nil>
	W0514 00:18:15.595326    4316 fix.go:138] unexpected machine state, will restart: <nil>
	I0514 00:18:15.597802    4316 out.go:177] * Restarting existing hyperv VM for "multinode-101100-m02" ...
	I0514 00:18:15.602068    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-101100-m02
	I0514 00:18:18.419508    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:18:18.419508    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:18.419508    4316 main.go:141] libmachine: Waiting for host to start...
	I0514 00:18:18.419508    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:18:20.447253    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:18:20.447253    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:20.447636    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:18:22.715374    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:18:22.716248    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:23.719516    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:18:25.665983    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:18:25.665983    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:25.665983    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:18:27.939881    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:18:27.939881    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:28.955227    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:18:30.938759    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:18:30.939457    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:30.939529    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:18:33.186613    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:18:33.187320    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:34.191867    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:18:36.230333    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:18:36.230333    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:36.230333    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:18:38.489721    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:18:38.489721    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:39.505162    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:18:41.491972    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:18:41.492654    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:41.492654    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:18:43.849440    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:18:43.850042    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:43.851849    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:18:45.777415    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:18:45.777415    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:45.777415    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:18:48.084790    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:18:48.084790    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:48.084790    4316 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\config.json ...
	I0514 00:18:48.086861    4316 machine.go:94] provisionDockerMachine start ...
	I0514 00:18:48.086913    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:18:50.013257    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:18:50.013257    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:50.013331    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:18:52.325832    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:18:52.325832    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:52.329461    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:18:52.330089    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.97.128 22 <nil> <nil>}
	I0514 00:18:52.330089    4316 main.go:141] libmachine: About to run SSH command:
	hostname
	I0514 00:18:52.466043    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0514 00:18:52.466043    4316 buildroot.go:166] provisioning hostname "multinode-101100-m02"
	I0514 00:18:52.466043    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:18:54.355964    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:18:54.355964    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:54.356414    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:18:56.624255    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:18:56.624255    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:56.628345    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:18:56.628478    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.97.128 22 <nil> <nil>}
	I0514 00:18:56.628478    4316 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-101100-m02 && echo "multinode-101100-m02" | sudo tee /etc/hostname
	I0514 00:18:56.781283    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-101100-m02
	
	I0514 00:18:56.781283    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:18:58.701836    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:18:58.702750    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:18:58.702750    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:00.983214    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:00.983214    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:00.987311    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:19:00.987488    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.97.128 22 <nil> <nil>}
	I0514 00:19:00.987488    4316 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-101100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-101100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-101100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0514 00:19:01.132677    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0514 00:19:01.132793    4316 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0514 00:19:01.132793    4316 buildroot.go:174] setting up certificates
	I0514 00:19:01.132793    4316 provision.go:84] configureAuth start
	I0514 00:19:01.132876    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:19:03.065570    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:03.065570    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:03.065570    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:05.447599    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:05.447599    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:05.447877    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:19:07.392388    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:07.392388    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:07.392634    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:09.718980    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:09.720082    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:09.720082    4316 provision.go:143] copyHostCerts
	I0514 00:19:09.720082    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0514 00:19:09.720082    4316 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0514 00:19:09.720082    4316 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0514 00:19:09.720791    4316 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0514 00:19:09.721397    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0514 00:19:09.721926    4316 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0514 00:19:09.722009    4316 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0514 00:19:09.722009    4316 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0514 00:19:09.723222    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0514 00:19:09.724232    4316 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0514 00:19:09.724232    4316 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0514 00:19:09.724232    4316 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0514 00:19:09.725680    4316 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-101100-m02 san=[127.0.0.1 172.23.97.128 localhost minikube multinode-101100-m02]
	I0514 00:19:10.051821    4316 provision.go:177] copyRemoteCerts
	I0514 00:19:10.061215    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0514 00:19:10.061363    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:19:12.012557    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:12.012557    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:12.012557    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:14.334062    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:14.334062    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:14.334569    4316 sshutil.go:53] new ssh client: &{IP:172.23.97.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m02\id_rsa Username:docker}
	I0514 00:19:14.449932    4316 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.388437s)
	I0514 00:19:14.449932    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0514 00:19:14.449932    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0514 00:19:14.499297    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0514 00:19:14.499826    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0514 00:19:14.546386    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0514 00:19:14.547091    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0514 00:19:14.587714    4316 provision.go:87] duration metric: took 13.4539789s to configureAuth
	I0514 00:19:14.587714    4316 buildroot.go:189] setting minikube options for container-runtime
	I0514 00:19:14.588629    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:19:14.588629    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:19:16.496233    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:16.496233    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:16.496233    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:18.751837    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:18.751837    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:18.756423    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:19:18.757016    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.97.128 22 <nil> <nil>}
	I0514 00:19:18.757016    4316 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0514 00:19:18.892580    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0514 00:19:18.892580    4316 buildroot.go:70] root file system type: tmpfs
	I0514 00:19:18.892775    4316 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0514 00:19:18.892831    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:19:20.791914    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:20.792235    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:20.792235    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:23.067078    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:23.067689    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:23.071582    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:19:23.072106    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.97.128 22 <nil> <nil>}
	I0514 00:19:23.072189    4316 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.23.102.122"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0514 00:19:23.233387    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.23.102.122
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0514 00:19:23.233539    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:19:25.121872    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:25.121872    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:25.122396    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:27.370520    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:27.370593    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:27.375540    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:19:27.375540    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.97.128 22 <nil> <nil>}
	I0514 00:19:27.375540    4316 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0514 00:19:29.620481    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0514 00:19:29.620590    4316 machine.go:97] duration metric: took 41.5310232s to provisionDockerMachine
	I0514 00:19:29.620590    4316 start.go:293] postStartSetup for "multinode-101100-m02" (driver="hyperv")
	I0514 00:19:29.620590    4316 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0514 00:19:29.630170    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0514 00:19:29.630170    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:19:31.552911    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:31.553116    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:31.553148    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:33.784752    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:33.784752    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:33.785375    4316 sshutil.go:53] new ssh client: &{IP:172.23.97.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m02\id_rsa Username:docker}
	I0514 00:19:33.893903    4316 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.26346s)
	I0514 00:19:33.906836    4316 ssh_runner.go:195] Run: cat /etc/os-release
	I0514 00:19:33.915351    4316 command_runner.go:130] > NAME=Buildroot
	I0514 00:19:33.915351    4316 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0514 00:19:33.915351    4316 command_runner.go:130] > ID=buildroot
	I0514 00:19:33.915351    4316 command_runner.go:130] > VERSION_ID=2023.02.9
	I0514 00:19:33.915351    4316 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0514 00:19:33.916035    4316 info.go:137] Remote host: Buildroot 2023.02.9
	I0514 00:19:33.916035    4316 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0514 00:19:33.916574    4316 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0514 00:19:33.917658    4316 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> 59842.pem in /etc/ssl/certs
	I0514 00:19:33.917658    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /etc/ssl/certs/59842.pem
	I0514 00:19:33.927803    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0514 00:19:33.945054    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /etc/ssl/certs/59842.pem (1708 bytes)
	I0514 00:19:33.988022    4316 start.go:296] duration metric: took 4.367152s for postStartSetup
	I0514 00:19:33.988022    4316 fix.go:56] duration metric: took 1m20.3526907s for fixHost
	I0514 00:19:33.988022    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:19:35.871620    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:35.871887    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:35.871968    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:38.102858    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:38.103492    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:38.108496    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:19:38.108496    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.97.128 22 <nil> <nil>}
	I0514 00:19:38.108496    4316 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0514 00:19:38.237822    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715645978.467786522
	
	I0514 00:19:38.238360    4316 fix.go:216] guest clock: 1715645978.467786522
	I0514 00:19:38.238360    4316 fix.go:229] Guest: 2024-05-14 00:19:38.467786522 +0000 UTC Remote: 2024-05-14 00:19:33.9880222 +0000 UTC m=+277.905688301 (delta=4.479764322s)
	I0514 00:19:38.238463    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:19:40.120852    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:40.121011    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:40.121011    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:42.346874    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:42.346874    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:42.351562    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:19:42.351562    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.97.128 22 <nil> <nil>}
	I0514 00:19:42.351562    4316 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715645978
	I0514 00:19:42.503079    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 14 00:19:38 UTC 2024
	
	I0514 00:19:42.503133    4316 fix.go:236] clock set: Tue May 14 00:19:38 UTC 2024
	 (err=<nil>)
	I0514 00:19:42.503133    4316 start.go:83] releasing machines lock for "multinode-101100-m02", held for 1m28.8673503s
	I0514 00:19:42.503403    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:19:44.384635    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:44.384635    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:44.384635    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:46.653431    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:46.653431    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:46.656221    4316 out.go:177] * Found network options:
	I0514 00:19:46.658525    4316 out.go:177]   - NO_PROXY=172.23.102.122
	W0514 00:19:46.660915    4316 proxy.go:119] fail to check proxy env: Error ip not in block
	I0514 00:19:46.662961    4316 out.go:177]   - NO_PROXY=172.23.102.122
	W0514 00:19:46.666175    4316 proxy.go:119] fail to check proxy env: Error ip not in block
	W0514 00:19:46.667615    4316 proxy.go:119] fail to check proxy env: Error ip not in block
	I0514 00:19:46.669610    4316 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0514 00:19:46.669684    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:19:46.677572    4316 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0514 00:19:46.678153    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:19:48.615764    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:48.615824    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:48.615824    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:48.640727    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:48.641101    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:48.641101    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:19:51.004312    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:51.004312    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:51.005234    4316 sshutil.go:53] new ssh client: &{IP:172.23.97.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m02\id_rsa Username:docker}
	I0514 00:19:51.025238    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:19:51.025238    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:51.025238    4316 sshutil.go:53] new ssh client: &{IP:172.23.97.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m02\id_rsa Username:docker}
	I0514 00:19:51.208753    4316 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0514 00:19:51.215947    4316 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5459721s)
	I0514 00:19:51.215947    4316 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0514 00:19:51.215947    4316 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.538085s)
	W0514 00:19:51.215947    4316 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0514 00:19:51.225018    4316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0514 00:19:51.250823    4316 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0514 00:19:51.251610    4316 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0514 00:19:51.251681    4316 start.go:494] detecting cgroup driver to use...
	I0514 00:19:51.251681    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 00:19:51.281668    4316 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0514 00:19:51.290468    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0514 00:19:51.316939    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0514 00:19:51.334713    4316 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0514 00:19:51.342698    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0514 00:19:51.368019    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 00:19:51.396019    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0514 00:19:51.422277    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 00:19:51.450060    4316 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0514 00:19:51.476813    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0514 00:19:51.503148    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0514 00:19:51.528279    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0514 00:19:51.555277    4316 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0514 00:19:51.572253    4316 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0514 00:19:51.580107    4316 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0514 00:19:51.605106    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:19:51.773755    4316 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0514 00:19:51.800702    4316 start.go:494] detecting cgroup driver to use...
	I0514 00:19:51.811030    4316 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0514 00:19:51.830848    4316 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0514 00:19:51.830848    4316 command_runner.go:130] > [Unit]
	I0514 00:19:51.830848    4316 command_runner.go:130] > Description=Docker Application Container Engine
	I0514 00:19:51.830848    4316 command_runner.go:130] > Documentation=https://docs.docker.com
	I0514 00:19:51.830848    4316 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0514 00:19:51.830848    4316 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0514 00:19:51.830848    4316 command_runner.go:130] > StartLimitBurst=3
	I0514 00:19:51.830848    4316 command_runner.go:130] > StartLimitIntervalSec=60
	I0514 00:19:51.830848    4316 command_runner.go:130] > [Service]
	I0514 00:19:51.830848    4316 command_runner.go:130] > Type=notify
	I0514 00:19:51.830848    4316 command_runner.go:130] > Restart=on-failure
	I0514 00:19:51.830848    4316 command_runner.go:130] > Environment=NO_PROXY=172.23.102.122
	I0514 00:19:51.830848    4316 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0514 00:19:51.830848    4316 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0514 00:19:51.830848    4316 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0514 00:19:51.830848    4316 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0514 00:19:51.830848    4316 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0514 00:19:51.830848    4316 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0514 00:19:51.830848    4316 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0514 00:19:51.830848    4316 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0514 00:19:51.830848    4316 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0514 00:19:51.830848    4316 command_runner.go:130] > ExecStart=
	I0514 00:19:51.830848    4316 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0514 00:19:51.830848    4316 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0514 00:19:51.830848    4316 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0514 00:19:51.830848    4316 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0514 00:19:51.830848    4316 command_runner.go:130] > LimitNOFILE=infinity
	I0514 00:19:51.830848    4316 command_runner.go:130] > LimitNPROC=infinity
	I0514 00:19:51.830848    4316 command_runner.go:130] > LimitCORE=infinity
	I0514 00:19:51.830848    4316 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0514 00:19:51.830848    4316 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0514 00:19:51.830848    4316 command_runner.go:130] > TasksMax=infinity
	I0514 00:19:51.830848    4316 command_runner.go:130] > TimeoutStartSec=0
	I0514 00:19:51.830848    4316 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0514 00:19:51.830848    4316 command_runner.go:130] > Delegate=yes
	I0514 00:19:51.831378    4316 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0514 00:19:51.831378    4316 command_runner.go:130] > KillMode=process
	I0514 00:19:51.831378    4316 command_runner.go:130] > [Install]
	I0514 00:19:51.831378    4316 command_runner.go:130] > WantedBy=multi-user.target
	I0514 00:19:51.839535    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 00:19:51.865772    4316 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0514 00:19:51.912691    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 00:19:51.951980    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0514 00:19:51.983632    4316 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0514 00:19:52.045579    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0514 00:19:52.067656    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 00:19:52.098073    4316 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0514 00:19:52.111889    4316 ssh_runner.go:195] Run: which cri-dockerd
	I0514 00:19:52.119036    4316 command_runner.go:130] > /usr/bin/cri-dockerd
	I0514 00:19:52.127858    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0514 00:19:52.144937    4316 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0514 00:19:52.185057    4316 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0514 00:19:52.357323    4316 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0514 00:19:52.544596    4316 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0514 00:19:52.544732    4316 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0514 00:19:52.586210    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:19:52.769373    4316 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0514 00:19:55.326422    4316 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5568288s)
	I0514 00:19:55.334572    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0514 00:19:55.364366    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 00:19:55.398019    4316 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0514 00:19:55.571997    4316 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0514 00:19:55.742930    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:19:55.921722    4316 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0514 00:19:55.959197    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 00:19:55.989752    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:19:56.162754    4316 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0514 00:19:56.260792    4316 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0514 00:19:56.268642    4316 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0514 00:19:56.276468    4316 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0514 00:19:56.276468    4316 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0514 00:19:56.276590    4316 command_runner.go:130] > Device: 0,22	Inode: 848         Links: 1
	I0514 00:19:56.276590    4316 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0514 00:19:56.276590    4316 command_runner.go:130] > Access: 2024-05-14 00:19:56.418179553 +0000
	I0514 00:19:56.276590    4316 command_runner.go:130] > Modify: 2024-05-14 00:19:56.418179553 +0000
	I0514 00:19:56.276590    4316 command_runner.go:130] > Change: 2024-05-14 00:19:56.421179722 +0000
	I0514 00:19:56.276590    4316 command_runner.go:130] >  Birth: -
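The "Will wait 60s for socket path" step above, confirmed by the `stat` output showing a socket, amounts to polling until the unix socket appears. A minimal sketch of such a wait loop (a hypothetical helper, not minikube's actual Go implementation):

```python
import os
import stat
import time


def wait_for_socket(path: str, timeout: float = 60.0, interval: float = 0.5) -> bool:
    """Poll until `path` exists and is a unix socket, or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            st = os.stat(path)
            if stat.S_ISSOCK(st.st_mode):
                return True
        except FileNotFoundError:
            pass
        time.sleep(interval)
    return False
```

For `/var/run/cri-dockerd.sock` the socket already exists when the wait begins, so the check returns on the first poll.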
	I0514 00:19:56.276590    4316 start.go:562] Will wait 60s for crictl version
	I0514 00:19:56.284826    4316 ssh_runner.go:195] Run: which crictl
	I0514 00:19:56.290588    4316 command_runner.go:130] > /usr/bin/crictl
	I0514 00:19:56.299029    4316 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0514 00:19:56.349024    4316 command_runner.go:130] > Version:  0.1.0
	I0514 00:19:56.349024    4316 command_runner.go:130] > RuntimeName:  docker
	I0514 00:19:56.349285    4316 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0514 00:19:56.349285    4316 command_runner.go:130] > RuntimeApiVersion:  v1
	I0514 00:19:56.349285    4316 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0514 00:19:56.356061    4316 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 00:19:56.381668    4316 command_runner.go:130] > 26.0.2
	I0514 00:19:56.390685    4316 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 00:19:56.416664    4316 command_runner.go:130] > 26.0.2
	I0514 00:19:56.421104    4316 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0514 00:19:56.423482    4316 out.go:177]   - env NO_PROXY=172.23.102.122
	I0514 00:19:56.425105    4316 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0514 00:19:56.428661    4316 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0514 00:19:56.428661    4316 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0514 00:19:56.428661    4316 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0514 00:19:56.428661    4316 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:95:ed Flags:up|broadcast|multicast|running}
	I0514 00:19:56.430655    4316 ip.go:210] interface addr: fe80::3ceb:68d:afab:af25/64
	I0514 00:19:56.430655    4316 ip.go:210] interface addr: 172.23.96.1/20
	I0514 00:19:56.440655    4316 ssh_runner.go:195] Run: grep 172.23.96.1	host.minikube.internal$ /etc/hosts
	I0514 00:19:56.446687    4316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
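The bash pipeline above makes the `host.minikube.internal` entry idempotent: it drops any existing line ending in the tab-separated hostname, then appends a fresh mapping. The same transformation in Python (an illustrative helper, operating on the file text rather than via sudo):

```python
def upsert_host(hosts_text: str, ip: str, name: str) -> str:
    """Drop any line ending in '\\t<name>' and append a fresh mapping,
    mirroring the `grep -v` + `echo` shell pipeline in the log above."""
    kept = [line for line in hosts_text.splitlines()
            if not line.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"
```

The same pattern appears again later in the log for `control-plane.minikube.internal`.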
	I0514 00:19:56.465727    4316 mustload.go:65] Loading cluster: multinode-101100
	I0514 00:19:56.466336    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:19:56.466947    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:19:58.362298    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:19:58.362298    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:19:58.362298    4316 host.go:66] Checking if "multinode-101100" exists ...
	I0514 00:19:58.363737    4316 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100 for IP: 172.23.97.128
	I0514 00:19:58.363737    4316 certs.go:194] generating shared ca certs ...
	I0514 00:19:58.363828    4316 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 00:19:58.364332    4316 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0514 00:19:58.364566    4316 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0514 00:19:58.364808    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0514 00:19:58.365072    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0514 00:19:58.365213    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0514 00:19:58.365213    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0514 00:19:58.365213    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem (1338 bytes)
	W0514 00:19:58.365812    4316 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984_empty.pem, impossibly tiny 0 bytes
	I0514 00:19:58.365843    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0514 00:19:58.366079    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0514 00:19:58.366293    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0514 00:19:58.366436    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0514 00:19:58.366436    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem (1708 bytes)
	I0514 00:19:58.366436    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:19:58.366962    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem -> /usr/share/ca-certificates/5984.pem
	I0514 00:19:58.367043    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /usr/share/ca-certificates/59842.pem
	I0514 00:19:58.367261    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0514 00:19:58.414702    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0514 00:19:58.459357    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0514 00:19:58.503434    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0514 00:19:58.545685    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0514 00:19:58.587861    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem --> /usr/share/ca-certificates/5984.pem (1338 bytes)
	I0514 00:19:58.629568    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /usr/share/ca-certificates/59842.pem (1708 bytes)
	I0514 00:19:58.680987    4316 ssh_runner.go:195] Run: openssl version
	I0514 00:19:58.688460    4316 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0514 00:19:58.698963    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5984.pem && ln -fs /usr/share/ca-certificates/5984.pem /etc/ssl/certs/5984.pem"
	I0514 00:19:58.725027    4316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5984.pem
	I0514 00:19:58.731571    4316 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0514 00:19:58.731669    4316 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0514 00:19:58.739103    4316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5984.pem
	I0514 00:19:58.747592    4316 command_runner.go:130] > 51391683
	I0514 00:19:58.754967    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5984.pem /etc/ssl/certs/51391683.0"
	I0514 00:19:58.782080    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59842.pem && ln -fs /usr/share/ca-certificates/59842.pem /etc/ssl/certs/59842.pem"
	I0514 00:19:58.809376    4316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59842.pem
	I0514 00:19:58.814825    4316 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0514 00:19:58.815513    4316 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0514 00:19:58.823670    4316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59842.pem
	I0514 00:19:58.831445    4316 command_runner.go:130] > 3ec20f2e
	I0514 00:19:58.839843    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59842.pem /etc/ssl/certs/3ec20f2e.0"
	I0514 00:19:58.870367    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0514 00:19:58.896373    4316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:19:58.904136    4316 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:19:58.904136    4316 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:19:58.911982    4316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:19:58.920632    4316 command_runner.go:130] > b5213941
	I0514 00:19:58.930068    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
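Each `openssl x509 -hash` / `ln -fs` pair above installs a CA cert under OpenSSL's `<subject-hash>.0` naming convention in `/etc/ssl/certs`. A sketch of just the symlink half (the hash itself must come from `openssl x509 -hash -noout -in <cert>`, as in the log; directory and paths below are illustrative):

```python
import os


def link_cert(hash_hex: str, cert_path: str,
              certs_dir: str = "/etc/ssl/certs") -> str:
    """Create the OpenSSL-style `<subject-hash>.0` symlink for a CA cert,
    mirroring the `test -L ... || ln -fs ...` commands above."""
    link = os.path.join(certs_dir, hash_hex + ".0")
    if not os.path.islink(link):  # idempotent, like the `test -L` guard
        os.symlink(cert_path, link)
    return link
```

With the values from this run, `51391683.0`, `3ec20f2e.0`, and `b5213941.0` end up pointing at the `5984.pem`, `59842.pem`, and `minikubeCA.pem` certs respectively.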
	I0514 00:19:58.957075    4316 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0514 00:19:58.964129    4316 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0514 00:19:58.964129    4316 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0514 00:19:58.964657    4316 kubeadm.go:928] updating node {m02 172.23.97.128 8443 v1.30.0 docker false true} ...
	I0514 00:19:58.964749    4316 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-101100-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.97.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-101100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0514 00:19:58.972565    4316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0514 00:19:58.990072    4316 command_runner.go:130] > kubeadm
	I0514 00:19:58.990072    4316 command_runner.go:130] > kubectl
	I0514 00:19:58.990072    4316 command_runner.go:130] > kubelet
	I0514 00:19:58.990072    4316 binaries.go:44] Found k8s binaries, skipping transfer
	I0514 00:19:59.001506    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0514 00:19:59.018193    4316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0514 00:19:59.047911    4316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0514 00:19:59.084815    4316 ssh_runner.go:195] Run: grep 172.23.102.122	control-plane.minikube.internal$ /etc/hosts
	I0514 00:19:59.090918    4316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.102.122	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0514 00:19:59.118549    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:19:59.295846    4316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0514 00:19:59.320107    4316 host.go:66] Checking if "multinode-101100" exists ...
	I0514 00:19:59.320829    4316 start.go:316] joinCluster: &{Name:multinode-101100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-101100 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.102.122 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.97.128 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.102.231 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provi
sioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0514 00:19:59.320939    4316 start.go:329] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.23.97.128 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0514 00:19:59.321044    4316 host.go:66] Checking if "multinode-101100-m02" exists ...
	I0514 00:19:59.321423    4316 mustload.go:65] Loading cluster: multinode-101100
	I0514 00:19:59.321782    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:19:59.322241    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:20:01.267151    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:20:01.267701    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:01.267701    4316 host.go:66] Checking if "multinode-101100" exists ...
	I0514 00:20:01.268409    4316 api_server.go:166] Checking apiserver status ...
	I0514 00:20:01.281769    4316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 00:20:01.281769    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:20:03.282523    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:20:03.283066    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:03.283066    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:20:05.625428    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:20:05.626217    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:05.626634    4316 sshutil.go:53] new ssh client: &{IP:172.23.102.122 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\id_rsa Username:docker}
	I0514 00:20:05.742922    4316 command_runner.go:130] > 1838
	I0514 00:20:05.743002    4316 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.4609465s)
	I0514 00:20:05.753442    4316 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1838/cgroup
	W0514 00:20:05.770851    4316 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1838/cgroup: Process exited with status 1
	stdout:
	
	stderr:
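The warning above comes from grepping `/proc/1838/cgroup` for a `freezer` controller line; on a cgroup v2 host there is no per-controller `freezer` entry, so the grep exits 1 and minikube falls back to the healthz check below. A small sketch of that parse (hypothetical helper, not minikube's code):

```python
import re
from typing import Optional


def freezer_cgroup(cgroup_text: str) -> Optional[str]:
    """Mirror `egrep '^[0-9]+:freezer:'` from the log: return the freezer
    cgroup path on a cgroup v1 host, or None on cgroup v2 (as seen above)."""
    for line in cgroup_text.splitlines():
        m = re.match(r"^\d+:freezer:(.*)$", line)
        if m:
            return m.group(1)
    return None
```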
	I0514 00:20:05.782794    4316 ssh_runner.go:195] Run: ls
	I0514 00:20:05.789601    4316 api_server.go:253] Checking apiserver healthz at https://172.23.102.122:8443/healthz ...
	I0514 00:20:05.798214    4316 api_server.go:279] https://172.23.102.122:8443/healthz returned 200:
	ok
	I0514 00:20:05.806299    4316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl drain multinode-101100-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0514 00:20:05.964406    4316 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-2lwsm, kube-system/kube-proxy-b25hq
	I0514 00:20:08.986976    4316 command_runner.go:130] > node/multinode-101100-m02 cordoned
	I0514 00:20:08.987180    4316 command_runner.go:130] > pod "busybox-fc5497c4f-q7442" has DeletionTimestamp older than 1 seconds, skipping
	I0514 00:20:08.987180    4316 command_runner.go:130] > node/multinode-101100-m02 drained
	I0514 00:20:08.987298    4316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl drain multinode-101100-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.1807388s)
	I0514 00:20:08.987298    4316 node.go:128] successfully drained node "multinode-101100-m02"
	I0514 00:20:08.987425    4316 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0514 00:20:08.987592    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:20:10.872392    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:20:10.872392    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:10.872392    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:20:13.132030    4316 main.go:141] libmachine: [stdout =====>] : 172.23.97.128
	
	I0514 00:20:13.132030    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:13.132030    4316 sshutil.go:53] new ssh client: &{IP:172.23.97.128 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m02\id_rsa Username:docker}
	I0514 00:20:13.514414    4316 command_runner.go:130] ! W0514 00:20:13.747274    1538 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0514 00:20:14.000423    4316 command_runner.go:130] ! W0514 00:20:14.233795    1538 cleanupnode.go:106] [reset] Failed to remove containers: failed to stop running pod a7476f13d104b3e1959acab279fd2b27a5c1e30de2afc09d28850c1a79234209: output: E0514 00:20:13.966689    1577 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-q7442_default\" network: cni config uninitialized" podSandboxID="a7476f13d104b3e1959acab279fd2b27a5c1e30de2afc09d28850c1a79234209"
	I0514 00:20:14.000423    4316 command_runner.go:130] ! time="2024-05-14T00:20:13Z" level=fatal msg="stopping the pod sandbox \"a7476f13d104b3e1959acab279fd2b27a5c1e30de2afc09d28850c1a79234209\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-q7442_default\" network: cni config uninitialized"
	I0514 00:20:14.000423    4316 command_runner.go:130] ! : exit status 1
	I0514 00:20:14.020545    4316 command_runner.go:130] > [preflight] Running pre-flight checks
	I0514 00:20:14.020668    4316 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0514 00:20:14.020668    4316 command_runner.go:130] > [reset] Stopping the kubelet service
	I0514 00:20:14.020668    4316 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0514 00:20:14.020668    4316 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0514 00:20:14.020668    4316 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0514 00:20:14.020668    4316 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0514 00:20:14.020668    4316 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0514 00:20:14.020668    4316 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0514 00:20:14.020668    4316 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0514 00:20:14.020839    4316 command_runner.go:130] > to reset your system's IPVS tables.
	I0514 00:20:14.020839    4316 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0514 00:20:14.020867    4316 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0514 00:20:14.020867    4316 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (5.0331194s)
	I0514 00:20:14.020867    4316 node.go:155] successfully reset node "multinode-101100-m02"
	I0514 00:20:14.021881    4316 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0514 00:20:14.022405    4316 kapi.go:59] client config for multinode-101100: &rest.Config{Host:"https://172.23.102.122:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0514 00:20:14.023033    4316 cert_rotation.go:137] Starting client certificate rotation controller
	I0514 00:20:14.023643    4316 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0514 00:20:14.023643    4316 round_trippers.go:463] DELETE https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:14.023643    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:14.023643    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:14.023643    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:14.023643    4316 round_trippers.go:473]     Content-Type: application/json
	I0514 00:20:14.039561    4316 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0514 00:20:14.039561    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:14.039561    4316 round_trippers.go:580]     Content-Length: 171
	I0514 00:20:14.039561    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:14 GMT
	I0514 00:20:14.039561    4316 round_trippers.go:580]     Audit-Id: 9d463315-fe38-4c7b-b5a0-d43f8cd931fb
	I0514 00:20:14.039561    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:14.039561    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:14.039561    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:14.039561    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:14.039561    4316 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-101100-m02","kind":"nodes","uid":"0720b898-6ac6-43e1-b265-5a00940f1a85"}}
	I0514 00:20:14.040164    4316 node.go:180] successfully deleted node "multinode-101100-m02"
	I0514 00:20:14.040164    4316 start.go:333] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.23.97.128 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
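The node removal above is a plain Kubernetes API call: a `DELETE` to `/api/v1/nodes/<name>` with a `DeleteOptions` body, which returned the `Status: Success` response shown. The request can be sketched with the standard library (the host and node name are taken from the log; nothing is actually sent here):

```python
import json
import urllib.request


def build_delete_node_request(host: str, node: str) -> urllib.request.Request:
    """Build the DELETE call shown in the log above (request only; TLS client
    certs from the kubeconfig would be needed to actually send it)."""
    body = json.dumps({"kind": "DeleteOptions", "apiVersion": "v1"}).encode()
    return urllib.request.Request(
        url=f"https://{host}/api/v1/nodes/{node}",
        data=body,
        method="DELETE",
        headers={"Content-Type": "application/json", "Accept": "application/json"},
    )
```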
	I0514 00:20:14.040231    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0514 00:20:14.040291    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:20:15.927718    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:20:15.927718    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:15.927718    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:20:18.201548    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:20:18.201958    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:18.202397    4316 sshutil.go:53] new ssh client: &{IP:172.23.102.122 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\id_rsa Username:docker}
	I0514 00:20:18.374719    4316 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token gyjkyc.rxhb3b7de4hp8phm --discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 
	I0514 00:20:18.374719    4316 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.3342099s)
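The `kubeadm token create --print-join-command` output above has a fixed shape: endpoint, bootstrap token, and CA cert hash. Extracting those fields can be sketched as (illustrative parser, not part of minikube):

```python
import re


def parse_join_command(cmd: str) -> dict:
    """Pull the endpoint, bootstrap token, and discovery CA cert hash out of a
    `kubeadm token create --print-join-command` line like the one above."""
    m = re.search(
        r"kubeadm join (\S+) --token (\S+) --discovery-token-ca-cert-hash (\S+)",
        cmd,
    )
    if not m:
        raise ValueError("not a kubeadm join command")
    return {"endpoint": m.group(1), "token": m.group(2), "ca_cert_hash": m.group(3)}
```

These three fields are exactly what is passed back into the `kubeadm join` invocation on the next step, with `--ignore-preflight-errors=all`, the cri-dockerd socket, and the node name appended.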
	I0514 00:20:18.374719    4316 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.23.97.128 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0514 00:20:18.374719    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gyjkyc.rxhb3b7de4hp8phm --discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-101100-m02"
	I0514 00:20:18.563178    4316 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0514 00:20:19.902974    4316 command_runner.go:130] > [preflight] Running pre-flight checks
	I0514 00:20:19.902974    4316 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0514 00:20:19.902974    4316 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0514 00:20:19.902974    4316 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0514 00:20:19.903162    4316 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0514 00:20:19.903162    4316 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0514 00:20:19.903162    4316 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0514 00:20:19.903162    4316 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002330449s
	I0514 00:20:19.903274    4316 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0514 00:20:19.903274    4316 command_runner.go:130] > This node has joined the cluster:
	I0514 00:20:19.903328    4316 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0514 00:20:19.903364    4316 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0514 00:20:19.903364    4316 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0514 00:20:19.903364    4316 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gyjkyc.rxhb3b7de4hp8phm --discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-101100-m02": (1.5285473s)
	I0514 00:20:19.903364    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0514 00:20:20.109428    4316 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0514 00:20:20.291006    4316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-101100-m02 minikube.k8s.io/updated_at=2024_05_14T00_20_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761 minikube.k8s.io/name=multinode-101100 minikube.k8s.io/primary=false
	I0514 00:20:20.403705    4316 command_runner.go:130] > node/multinode-101100-m02 labeled
	I0514 00:20:20.403803    4316 start.go:318] duration metric: took 21.0816221s to joinCluster
	I0514 00:20:20.403895    4316 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.23.97.128 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0514 00:20:20.407621    4316 out.go:177] * Verifying Kubernetes components...
	I0514 00:20:20.404440    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:20:20.420742    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:20:20.628880    4316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0514 00:20:20.663973    4316 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0514 00:20:20.664375    4316 kapi.go:59] client config for multinode-101100: &rest.Config{Host:"https://172.23.102.122:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0514 00:20:20.665089    4316 node_ready.go:35] waiting up to 6m0s for node "multinode-101100-m02" to be "Ready" ...
	I0514 00:20:20.665089    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:20.665089    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:20.665089    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:20.665089    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:20.677455    4316 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0514 00:20:20.677455    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:20.677455    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:20.677455    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:20.677455    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:20.677455    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:20 GMT
	I0514 00:20:20.677455    4316 round_trippers.go:580]     Audit-Id: 4f488f36-facd-4f63-be23-a295b926cc9a
	I0514 00:20:20.677455    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:20.677455    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"1999","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0514 00:20:21.178898    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:21.179047    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:21.179047    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:21.179047    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:21.189724    4316 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0514 00:20:21.189724    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:21.189724    4316 round_trippers.go:580]     Audit-Id: 7904380d-f5cd-4f00-81c9-968f56135bb0
	I0514 00:20:21.189724    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:21.189724    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:21.189724    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:21.189724    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:21.189724    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:21 GMT
	I0514 00:20:21.189724    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"1999","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0514 00:20:21.668458    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:21.668458    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:21.668458    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:21.668458    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:21.674275    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:20:21.674275    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:21.674275    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:21 GMT
	I0514 00:20:21.674275    4316 round_trippers.go:580]     Audit-Id: cd16b4f3-67c4-4c90-9b2d-78228fd691f5
	I0514 00:20:21.674275    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:21.674275    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:21.674275    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:21.674275    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:21.675139    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"1999","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0514 00:20:22.175885    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:22.175885    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:22.175885    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:22.175885    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:22.179828    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:20:22.180344    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:22.180344    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:22 GMT
	I0514 00:20:22.180344    4316 round_trippers.go:580]     Audit-Id: c458170d-00d4-4dae-b03d-855900e80ad8
	I0514 00:20:22.180344    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:22.180344    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:22.180344    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:22.180344    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:22.180344    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"1999","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0514 00:20:22.675734    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:22.675805    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:22.675805    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:22.675805    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:22.678052    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:20:22.678052    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:22.678052    4316 round_trippers.go:580]     Audit-Id: d2bbaeba-16e5-4d26-99e5-bb2962aa8b6b
	I0514 00:20:22.678052    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:22.678052    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:22.678052    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:22.678052    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:22.678842    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:22 GMT
	I0514 00:20:22.678842    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"1999","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0514 00:20:22.678842    4316 node_ready.go:53] node "multinode-101100-m02" has status "Ready":"False"
	I0514 00:20:23.174368    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:23.174812    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:23.175029    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:23.175029    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:23.178422    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:20:23.178873    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:23.178873    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:23 GMT
	I0514 00:20:23.178941    4316 round_trippers.go:580]     Audit-Id: e3d84f71-b647-4a4f-a589-f5db06f83577
	I0514 00:20:23.178941    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:23.178941    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:23.178941    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:23.178941    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:23.179425    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"2022","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0514 00:20:23.675729    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:23.675729    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:23.675729    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:23.675729    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:23.678954    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:20:23.678954    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:23.678954    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:23.678954    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:23 GMT
	I0514 00:20:23.678954    4316 round_trippers.go:580]     Audit-Id: 83b98649-baaf-48bb-a953-f2b2a96298a4
	I0514 00:20:23.678954    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:23.678954    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:23.678954    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:23.679290    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"2022","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0514 00:20:24.172786    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:24.172786    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:24.172862    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:24.172862    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:24.176750    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:20:24.176750    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:24.176750    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:24.176750    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:24.176750    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:24.176750    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:24 GMT
	I0514 00:20:24.176750    4316 round_trippers.go:580]     Audit-Id: 06b5bf1b-4975-48b3-a94e-dedbae892198
	I0514 00:20:24.176750    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:24.177368    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"2022","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0514 00:20:24.673432    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:24.673432    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:24.673432    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:24.673432    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:24.677995    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:20:24.678210    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:24.678210    4316 round_trippers.go:580]     Audit-Id: 7565a9bd-70ff-47d5-b68e-54e4bc889056
	I0514 00:20:24.678210    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:24.678210    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:24.678210    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:24.678210    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:24.678210    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:24 GMT
	I0514 00:20:24.678945    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"2022","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0514 00:20:25.173269    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:25.173390    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:25.173390    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:25.173390    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:25.176577    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:20:25.176577    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:25.176577    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:25.176577    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:25.176577    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:25 GMT
	I0514 00:20:25.176577    4316 round_trippers.go:580]     Audit-Id: 018fba78-7a56-4803-93f4-61b7fae28f2f
	I0514 00:20:25.176577    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:25.177495    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:25.177595    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"2022","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0514 00:20:25.178390    4316 node_ready.go:53] node "multinode-101100-m02" has status "Ready":"False"
	I0514 00:20:25.674685    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:25.674877    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:25.674877    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:25.674997    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:25.677798    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:20:25.677798    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:25.677798    4316 round_trippers.go:580]     Audit-Id: 632e6767-5a50-4eaf-b7aa-467bd2b002e1
	I0514 00:20:25.677798    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:25.677798    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:25.677798    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:25.678823    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:25.678823    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:25 GMT
	I0514 00:20:25.678968    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"2022","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0514 00:20:26.176533    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:26.176655    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:26.176655    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:26.176655    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:26.181919    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:20:26.181919    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:26.181919    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:26.181919    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:26.181919    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:26.181919    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:26 GMT
	I0514 00:20:26.181919    4316 round_trippers.go:580]     Audit-Id: d9c09ef0-8440-4dba-9ecc-5e59b4739c81
	I0514 00:20:26.181919    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:26.181919    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"2022","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3674 chars]
	I0514 00:20:26.676841    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:26.677235    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:26.677235    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:26.677235    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:26.681043    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:20:26.681043    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:26.681043    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:26.681043    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:26.681043    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:26.681043    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:26 GMT
	I0514 00:20:26.681386    4316 round_trippers.go:580]     Audit-Id: 52a26556-6065-49f4-b55a-c9ccf246bee1
	I0514 00:20:26.681386    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:26.681722    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"2028","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3932 chars]
	I0514 00:20:26.682380    4316 node_ready.go:49] node "multinode-101100-m02" has status "Ready":"True"
	I0514 00:20:26.682490    4316 node_ready.go:38] duration metric: took 6.017016s for node "multinode-101100-m02" to be "Ready" ...
	I0514 00:20:26.682490    4316 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 00:20:26.682725    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods
	I0514 00:20:26.682725    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:26.682725    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:26.682725    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:26.690117    4316 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0514 00:20:26.690117    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:26.690117    4316 round_trippers.go:580]     Audit-Id: afba9995-8927-4ba9-aca5-049f43a71e86
	I0514 00:20:26.690117    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:26.690117    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:26.690117    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:26.690117    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:26.690117    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:26 GMT
	I0514 00:20:26.691742    4316 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2031"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1851","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86160 chars]
	I0514 00:20:26.694767    4316 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:26.695393    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:20:26.695393    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:26.695444    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:26.695444    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:26.697669    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:20:26.697669    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:26.697669    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:26.697669    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:26 GMT
	I0514 00:20:26.697669    4316 round_trippers.go:580]     Audit-Id: 5f80650d-9d8a-413c-8296-41fb51db0810
	I0514 00:20:26.697669    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:26.697669    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:26.697669    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:26.698737    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1851","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0514 00:20:26.699401    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:20:26.699401    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:26.699500    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:26.699500    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:26.702063    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:20:26.702063    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:26.702063    4316 round_trippers.go:580]     Audit-Id: 4bcb1c3f-f4ab-41f0-bcb1-164cbd8354be
	I0514 00:20:26.702063    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:26.702063    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:26.702063    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:26.702063    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:26.702063    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:26 GMT
	I0514 00:20:26.702449    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:20:26.702798    4316 pod_ready.go:92] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"True"
	I0514 00:20:26.702860    4316 pod_ready.go:81] duration metric: took 7.4951ms for pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:26.702860    4316 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:26.702927    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-101100
	I0514 00:20:26.702927    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:26.702927    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:26.702994    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:26.705361    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:20:26.705361    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:26.705361    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:26.705361    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:26 GMT
	I0514 00:20:26.705361    4316 round_trippers.go:580]     Audit-Id: cd2f2035-d4c9-4f0f-ad29-1c24c05857e4
	I0514 00:20:26.705361    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:26.705361    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:26.705361    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:26.705906    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-101100","namespace":"kube-system","uid":"74cd34fe-a56b-453d-afb3-a9db3db0d5ba","resourceVersion":"1779","creationTimestamp":"2024-05-14T00:16:55Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.102.122:2379","kubernetes.io/config.hash":"62d8afc7714e8ab65bff9675d120bb67","kubernetes.io/config.mirror":"62d8afc7714e8ab65bff9675d120bb67","kubernetes.io/config.seen":"2024-05-14T00:16:49.843121737Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:16:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0514 00:20:26.705970    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:20:26.705970    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:26.705970    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:26.705970    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:26.708643    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:20:26.708643    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:26.708643    4316 round_trippers.go:580]     Audit-Id: 5e5f0078-02cf-4e35-af1f-329b3a2e82c5
	I0514 00:20:26.708643    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:26.708643    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:26.708643    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:26.708643    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:26.708643    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:26 GMT
	I0514 00:20:26.708643    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:20:26.708643    4316 pod_ready.go:92] pod "etcd-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0514 00:20:26.708643    4316 pod_ready.go:81] duration metric: took 5.7829ms for pod "etcd-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:26.708643    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:26.710079    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-101100
	I0514 00:20:26.710079    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:26.710079    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:26.710079    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:26.712127    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:20:26.712127    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:26.712127    4316 round_trippers.go:580]     Audit-Id: 29635718-3424-4556-b7b1-f7048c0ff12b
	I0514 00:20:26.712127    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:26.712127    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:26.712127    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:26.712127    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:26.712127    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:26 GMT
	I0514 00:20:26.712127    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-101100","namespace":"kube-system","uid":"60889645-4c2d-4cfc-b322-c0f1b6e34503","resourceVersion":"1775","creationTimestamp":"2024-05-14T00:16:55Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.102.122:8443","kubernetes.io/config.hash":"378d61cf78af695f1df41e321907a84d","kubernetes.io/config.mirror":"378d61cf78af695f1df41e321907a84d","kubernetes.io/config.seen":"2024-05-14T00:16:49.778409853Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:16:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0514 00:20:26.712127    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:20:26.712127    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:26.713235    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:26.713235    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:26.715268    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:20:26.715268    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:26.715595    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:26.715595    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:26 GMT
	I0514 00:20:26.715595    4316 round_trippers.go:580]     Audit-Id: d75dec04-3818-4975-a61f-dbe1b34d57cb
	I0514 00:20:26.715595    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:26.715595    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:26.715635    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:26.715635    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:20:26.715635    4316 pod_ready.go:92] pod "kube-apiserver-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0514 00:20:26.715635    4316 pod_ready.go:81] duration metric: took 6.9916ms for pod "kube-apiserver-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:26.715635    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:26.716239    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-101100
	I0514 00:20:26.716239    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:26.716279    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:26.716279    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:26.717886    4316 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0514 00:20:26.717886    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:26.717886    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:26 GMT
	I0514 00:20:26.717886    4316 round_trippers.go:580]     Audit-Id: 0ee9a6e5-fc25-42b3-89ba-ad4b9bc32b3e
	I0514 00:20:26.717886    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:26.717886    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:26.717886    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:26.718738    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:26.718998    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-101100","namespace":"kube-system","uid":"1a74381a-7477-4fd3-b344-c4a230014f97","resourceVersion":"1752","creationTimestamp":"2024-05-13T23:56:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5393de2704b2efef461d22fa52aa93c8","kubernetes.io/config.mirror":"5393de2704b2efef461d22fa52aa93c8","kubernetes.io/config.seen":"2024-05-13T23:56:09.392106640Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0514 00:20:26.718998    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:20:26.719517    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:26.719517    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:26.719573    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:26.721694    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:20:26.722383    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:26.722447    4316 round_trippers.go:580]     Audit-Id: cca086da-6220-4532-9a39-cd003cd2256e
	I0514 00:20:26.722447    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:26.722447    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:26.722447    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:26.722447    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:26.722447    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:26 GMT
	I0514 00:20:26.722447    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:20:26.722447    4316 pod_ready.go:92] pod "kube-controller-manager-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0514 00:20:26.722447    4316 pod_ready.go:81] duration metric: took 6.2858ms for pod "kube-controller-manager-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:26.722447    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8zsgn" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:26.879504    4316 request.go:629] Waited for 156.1624ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8zsgn
	I0514 00:20:26.879504    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8zsgn
	I0514 00:20:26.879504    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:26.879504    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:26.879504    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:26.883220    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:20:26.883220    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:26.883220    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:27 GMT
	I0514 00:20:26.883220    4316 round_trippers.go:580]     Audit-Id: 31ca88a8-1afd-4794-a7dc-768dedd04973
	I0514 00:20:26.883220    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:26.883220    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:26.883220    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:26.883220    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:26.884206    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8zsgn","generateName":"kube-proxy-","namespace":"kube-system","uid":"af208cbd-fa8a-4822-9b19-dc30f63fa59c","resourceVersion":"1621","creationTimestamp":"2024-05-14T00:03:17Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:03:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0514 00:20:27.082955    4316 request.go:629] Waited for 198.1349ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:20:27.082955    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:20:27.082955    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:27.082955    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:27.082955    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:27.087392    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:20:27.087440    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:27.087440    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:27.087440    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:27.087440    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:27 GMT
	I0514 00:20:27.087440    4316 round_trippers.go:580]     Audit-Id: 09b30563-6a9e-4e45-81a3-ba9db26baa13
	I0514 00:20:27.087440    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:27.087440    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:27.087440    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"fd2d4a0b-dc97-4959-b2ba-0f51719ad2b3","resourceVersion":"1836","creationTimestamp":"2024-05-14T00:12:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_12_45_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:12:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4400 chars]
	I0514 00:20:27.088084    4316 pod_ready.go:97] node "multinode-101100-m03" hosting pod "kube-proxy-8zsgn" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100-m03" has status "Ready":"Unknown"
	I0514 00:20:27.088164    4316 pod_ready.go:81] duration metric: took 365.6932ms for pod "kube-proxy-8zsgn" in "kube-system" namespace to be "Ready" ...
	E0514 00:20:27.088164    4316 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-101100-m03" hosting pod "kube-proxy-8zsgn" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-101100-m03" has status "Ready":"Unknown"
	I0514 00:20:27.088164    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b25hq" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:27.286724    4316 request.go:629] Waited for 198.5478ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b25hq
	I0514 00:20:27.286905    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b25hq
	I0514 00:20:27.286905    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:27.286905    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:27.286905    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:27.290434    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:20:27.290434    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:27.290434    4316 round_trippers.go:580]     Audit-Id: 11e5a6ce-c5f5-4a8a-b5b2-e65b4e34c84c
	I0514 00:20:27.290434    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:27.290434    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:27.290434    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:27.290434    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:27.290434    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:27 GMT
	I0514 00:20:27.290900    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-b25hq","generateName":"kube-proxy-","namespace":"kube-system","uid":"d39f5818-3e88-4162-a7ce-734ca28103bf","resourceVersion":"2012","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5837 chars]
	I0514 00:20:27.487104    4316 request.go:629] Waited for 195.428ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:27.487104    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:20:27.487236    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:27.487236    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:27.487236    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:27.490649    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:20:27.491055    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:27.491055    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:27.491055    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:27.491055    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:27 GMT
	I0514 00:20:27.491055    4316 round_trippers.go:580]     Audit-Id: 7c1111cd-33b0-4052-8f89-f3f64bfbdf47
	I0514 00:20:27.491055    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:27.491055    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:27.491490    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"2028","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3932 chars]
	I0514 00:20:27.491632    4316 pod_ready.go:92] pod "kube-proxy-b25hq" in "kube-system" namespace has status "Ready":"True"
	I0514 00:20:27.491632    4316 pod_ready.go:81] duration metric: took 403.4426ms for pod "kube-proxy-b25hq" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:27.491632    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zhcz6" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:27.690118    4316 request.go:629] Waited for 197.9417ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zhcz6
	I0514 00:20:27.690713    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zhcz6
	I0514 00:20:27.690713    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:27.690713    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:27.690713    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:27.702485    4316 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0514 00:20:27.702485    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:27.702485    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:27.702485    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:27.702485    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:27.702485    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:27.702485    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:27 GMT
	I0514 00:20:27.702485    4316 round_trippers.go:580]     Audit-Id: eb7f200a-9aed-42d0-8f92-a3053a93ae8f
	I0514 00:20:27.703212    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zhcz6","generateName":"kube-proxy-","namespace":"kube-system","uid":"a9a488af-41ba-47f3-87b0-5a2f062afad6","resourceVersion":"1732","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0514 00:20:27.877463    4316 request.go:629] Waited for 173.4471ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:20:27.877463    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:20:27.877587    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:27.877587    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:27.877587    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:27.882297    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:20:27.882297    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:27.882297    4316 round_trippers.go:580]     Audit-Id: d7d3e025-019f-44a9-9a52-bc5a3a24882d
	I0514 00:20:27.882297    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:27.882297    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:27.882297    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:27.882297    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:27.882297    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:28 GMT
	I0514 00:20:27.882297    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:20:27.883575    4316 pod_ready.go:92] pod "kube-proxy-zhcz6" in "kube-system" namespace has status "Ready":"True"
	I0514 00:20:27.883575    4316 pod_ready.go:81] duration metric: took 391.3861ms for pod "kube-proxy-zhcz6" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:27.883575    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:28.080394    4316 request.go:629] Waited for 196.8061ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-101100
	I0514 00:20:28.080613    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-101100
	I0514 00:20:28.080613    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:28.080613    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:28.080613    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:28.086458    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:20:28.086458    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:28.086458    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:28.086458    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:28 GMT
	I0514 00:20:28.086458    4316 round_trippers.go:580]     Audit-Id: 0fcc1969-0c8e-49e4-bb7a-ae562507ee61
	I0514 00:20:28.086458    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:28.086458    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:28.086458    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:28.086458    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-101100","namespace":"kube-system","uid":"d7300c2d-377f-4061-bd34-5f7593b7e827","resourceVersion":"1756","creationTimestamp":"2024-05-13T23:56:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8083abd658221f47cabf81a00c4ca98e","kubernetes.io/config.mirror":"8083abd658221f47cabf81a00c4ca98e","kubernetes.io/config.seen":"2024-05-13T23:56:09.392108241Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0514 00:20:28.281620    4316 request.go:629] Waited for 194.481ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:20:28.281926    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:20:28.281926    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:28.281990    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:28.281990    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:28.288804    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:20:28.289345    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:28.289345    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:28 GMT
	I0514 00:20:28.289345    4316 round_trippers.go:580]     Audit-Id: beee54d5-4485-47b5-918d-8122b6f0e00b
	I0514 00:20:28.289457    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:28.289493    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:28.289531    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:28.289565    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:28.289926    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:20:28.290642    4316 pod_ready.go:92] pod "kube-scheduler-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0514 00:20:28.290642    4316 pod_ready.go:81] duration metric: took 407.0407ms for pod "kube-scheduler-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:20:28.290748    4316 pod_ready.go:38] duration metric: took 1.6079386s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 00:20:28.290805    4316 system_svc.go:44] waiting for kubelet service to be running ....
	I0514 00:20:28.300837    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0514 00:20:28.322580    4316 system_svc.go:56] duration metric: took 31.7881ms WaitForService to wait for kubelet
	I0514 00:20:28.322580    4316 kubeadm.go:576] duration metric: took 7.9181778s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0514 00:20:28.323346    4316 node_conditions.go:102] verifying NodePressure condition ...
	I0514 00:20:28.485289    4316 request.go:629] Waited for 161.7138ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes
	I0514 00:20:28.485289    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes
	I0514 00:20:28.485289    4316 round_trippers.go:469] Request Headers:
	I0514 00:20:28.485289    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:20:28.485289    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:20:28.489493    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:20:28.489493    4316 round_trippers.go:577] Response Headers:
	I0514 00:20:28.489493    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:20:28.489493    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:20:28.489493    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:20:28.489493    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:20:28 GMT
	I0514 00:20:28.489493    4316 round_trippers.go:580]     Audit-Id: cbc88b87-5fbd-4db7-a59e-62381d76c441
	I0514 00:20:28.489493    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:20:28.490520    4316 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2034"},"items":[{"metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15489 chars]
	I0514 00:20:28.491575    4316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 00:20:28.491575    4316 node_conditions.go:123] node cpu capacity is 2
	I0514 00:20:28.491575    4316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 00:20:28.491575    4316 node_conditions.go:123] node cpu capacity is 2
	I0514 00:20:28.491575    4316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 00:20:28.491575    4316 node_conditions.go:123] node cpu capacity is 2
	I0514 00:20:28.491575    4316 node_conditions.go:105] duration metric: took 168.2179ms to run NodePressure ...
	I0514 00:20:28.491575    4316 start.go:240] waiting for startup goroutines ...
	I0514 00:20:28.491688    4316 start.go:254] writing updated cluster config ...
	I0514 00:20:28.495940    4316 out.go:177] 
	I0514 00:20:28.498719    4316 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:20:28.506905    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:20:28.507068    4316 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\config.json ...
	I0514 00:20:28.513669    4316 out.go:177] * Starting "multinode-101100-m03" worker node in "multinode-101100" cluster
	I0514 00:20:28.517086    4316 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0514 00:20:28.517086    4316 cache.go:56] Caching tarball of preloaded images
	I0514 00:20:28.517889    4316 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0514 00:20:28.518037    4316 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0514 00:20:28.518258    4316 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\config.json ...
	I0514 00:20:28.521555    4316 start.go:360] acquireMachinesLock for multinode-101100-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0514 00:20:28.521623    4316 start.go:364] duration metric: took 68µs to acquireMachinesLock for "multinode-101100-m03"
	I0514 00:20:28.521785    4316 start.go:96] Skipping create...Using existing machine configuration
	I0514 00:20:28.521851    4316 fix.go:54] fixHost starting: m03
	I0514 00:20:28.522162    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:20:30.399299    4316 main.go:141] libmachine: [stdout =====>] : Off
	
	I0514 00:20:30.399299    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:30.399299    4316 fix.go:112] recreateIfNeeded on multinode-101100-m03: state=Stopped err=<nil>
	W0514 00:20:30.399374    4316 fix.go:138] unexpected machine state, will restart: <nil>
	I0514 00:20:30.401935    4316 out.go:177] * Restarting existing hyperv VM for "multinode-101100-m03" ...
	I0514 00:20:30.405567    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-101100-m03
	I0514 00:20:33.177006    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:20:33.177006    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:33.177006    4316 main.go:141] libmachine: Waiting for host to start...
	I0514 00:20:33.177089    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:20:35.181392    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:20:35.181392    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:35.181392    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:20:37.489532    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:20:37.490348    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:38.492807    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:20:40.424581    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:20:40.424581    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:40.424581    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:20:42.708894    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:20:42.708894    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:43.709651    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:20:45.696450    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:20:45.696450    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:45.696450    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:20:47.967696    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:20:47.967696    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:48.979385    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:20:50.995987    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:20:50.995987    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:50.996254    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:20:53.267989    4316 main.go:141] libmachine: [stdout =====>] : 
	I0514 00:20:53.267989    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:54.276705    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:20:56.240941    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:20:56.241739    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:56.241739    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:20:58.547415    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:20:58.547415    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:20:58.550805    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:00.416141    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:00.416141    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:00.416141    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:02.686191    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:02.686191    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:02.687104    4316 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100\config.json ...
	I0514 00:21:02.689123    4316 machine.go:94] provisionDockerMachine start ...
	I0514 00:21:02.689123    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:04.570102    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:04.570102    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:04.570194    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:06.831811    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:06.831811    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:06.835674    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:21:06.836017    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.37 22 <nil> <nil>}
	I0514 00:21:06.836017    4316 main.go:141] libmachine: About to run SSH command:
	hostname
	I0514 00:21:06.976410    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0514 00:21:06.976410    4316 buildroot.go:166] provisioning hostname "multinode-101100-m03"
	I0514 00:21:06.976958    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:08.855652    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:08.855652    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:08.855652    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:11.080615    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:11.080615    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:11.084377    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:21:11.084940    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.37 22 <nil> <nil>}
	I0514 00:21:11.084940    4316 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-101100-m03 && echo "multinode-101100-m03" | sudo tee /etc/hostname
	I0514 00:21:11.255633    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-101100-m03
	
	I0514 00:21:11.255633    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:13.154922    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:13.154922    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:13.154922    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:15.398263    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:15.399017    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:15.402939    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:21:15.402939    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.37 22 <nil> <nil>}
	I0514 00:21:15.402939    4316 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-101100-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-101100-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-101100-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0514 00:21:15.556115    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0514 00:21:15.556115    4316 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0514 00:21:15.556115    4316 buildroot.go:174] setting up certificates
	I0514 00:21:15.556115    4316 provision.go:84] configureAuth start
	I0514 00:21:15.556115    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:17.505754    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:17.505836    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:17.505836    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:19.771382    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:19.771604    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:19.771604    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:21.674514    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:21.675298    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:21.675298    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:23.945466    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:23.946417    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:23.946417    4316 provision.go:143] copyHostCerts
	I0514 00:21:23.946661    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0514 00:21:23.946894    4316 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0514 00:21:23.946894    4316 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0514 00:21:23.947291    4316 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0514 00:21:23.948282    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0514 00:21:23.948520    4316 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0514 00:21:23.948608    4316 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0514 00:21:23.948879    4316 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0514 00:21:23.949724    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0514 00:21:23.949966    4316 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0514 00:21:23.950070    4316 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0514 00:21:23.950193    4316 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0514 00:21:23.951665    4316 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-101100-m03 san=[127.0.0.1 172.23.111.37 localhost minikube multinode-101100-m03]
	I0514 00:21:24.145321    4316 provision.go:177] copyRemoteCerts
	I0514 00:21:24.156296    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0514 00:21:24.156405    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:26.044598    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:26.045653    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:26.045728    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:28.305311    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:28.305311    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:28.305507    4316 sshutil.go:53] new ssh client: &{IP:172.23.111.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m03\id_rsa Username:docker}
	I0514 00:21:28.413951    4316 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2573821s)
	I0514 00:21:28.413951    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0514 00:21:28.413951    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0514 00:21:28.456658    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0514 00:21:28.456658    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0514 00:21:28.500816    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0514 00:21:28.500816    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0514 00:21:28.545337    4316 provision.go:87] duration metric: took 12.9883902s to configureAuth
	I0514 00:21:28.545337    4316 buildroot.go:189] setting minikube options for container-runtime
	I0514 00:21:28.546226    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:21:28.546350    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:30.413910    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:30.413910    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:30.413910    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:32.654867    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:32.654867    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:32.658254    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:21:32.658845    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.37 22 <nil> <nil>}
	I0514 00:21:32.658845    4316 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0514 00:21:32.802245    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0514 00:21:32.802245    4316 buildroot.go:70] root file system type: tmpfs
	I0514 00:21:32.802245    4316 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0514 00:21:32.802797    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:34.691259    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:34.691259    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:34.691341    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:36.944951    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:36.944951    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:36.948633    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:21:36.948633    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.37 22 <nil> <nil>}
	I0514 00:21:36.949469    4316 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.23.102.122"
	Environment="NO_PROXY=172.23.102.122,172.23.97.128"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0514 00:21:37.105736    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.23.102.122
	Environment=NO_PROXY=172.23.102.122,172.23.97.128
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0514 00:21:37.105736    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:38.987690    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:38.987690    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:38.987690    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:41.189935    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:41.189935    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:41.194292    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:21:41.194772    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.37 22 <nil> <nil>}
	I0514 00:21:41.194772    4316 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0514 00:21:43.378819    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0514 00:21:43.378880    4316 machine.go:97] duration metric: took 40.6871503s to provisionDockerMachine
	I0514 00:21:43.378918    4316 start.go:293] postStartSetup for "multinode-101100-m03" (driver="hyperv")
	I0514 00:21:43.378918    4316 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0514 00:21:43.387915    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0514 00:21:43.387915    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:45.259582    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:45.259582    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:45.260125    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:47.508138    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:47.508854    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:47.509144    4316 sshutil.go:53] new ssh client: &{IP:172.23.111.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m03\id_rsa Username:docker}
	I0514 00:21:47.621925    4316 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2333264s)
	I0514 00:21:47.630787    4316 ssh_runner.go:195] Run: cat /etc/os-release
	I0514 00:21:47.636687    4316 command_runner.go:130] > NAME=Buildroot
	I0514 00:21:47.636828    4316 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0514 00:21:47.636894    4316 command_runner.go:130] > ID=buildroot
	I0514 00:21:47.636956    4316 command_runner.go:130] > VERSION_ID=2023.02.9
	I0514 00:21:47.637013    4316 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0514 00:21:47.637193    4316 info.go:137] Remote host: Buildroot 2023.02.9
	I0514 00:21:47.637247    4316 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0514 00:21:47.637507    4316 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0514 00:21:47.638144    4316 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> 59842.pem in /etc/ssl/certs
	I0514 00:21:47.638144    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /etc/ssl/certs/59842.pem
	I0514 00:21:47.647072    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0514 00:21:47.662813    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /etc/ssl/certs/59842.pem (1708 bytes)
	I0514 00:21:47.705663    4316 start.go:296] duration metric: took 4.3264685s for postStartSetup
	I0514 00:21:47.705663    4316 fix.go:56] duration metric: took 1m19.1788045s for fixHost
	I0514 00:21:47.705770    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:49.581897    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:49.581897    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:49.581897    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:51.819389    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:51.819389    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:51.824010    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:21:51.824349    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.37 22 <nil> <nil>}
	I0514 00:21:51.824416    4316 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0514 00:21:51.954481    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715646112.184202835
	
	I0514 00:21:51.954481    4316 fix.go:216] guest clock: 1715646112.184202835
	I0514 00:21:51.954481    4316 fix.go:229] Guest: 2024-05-14 00:21:52.184202835 +0000 UTC Remote: 2024-05-14 00:21:47.7056639 +0000 UTC m=+411.614762401 (delta=4.478538935s)
	I0514 00:21:51.954481    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:53.836606    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:53.836606    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:53.836606    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:21:56.092057    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:21:56.092753    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:56.096518    4316 main.go:141] libmachine: Using SSH client type: native
	I0514 00:21:56.096589    4316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.37 22 <nil> <nil>}
	I0514 00:21:56.096589    4316 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715646111
	I0514 00:21:56.248205    4316 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 14 00:21:51 UTC 2024
	
	I0514 00:21:56.249225    4316 fix.go:236] clock set: Tue May 14 00:21:51 UTC 2024
	 (err=<nil>)
	I0514 00:21:56.249225    4316 start.go:83] releasing machines lock for "multinode-101100-m03", held for 1m27.7219102s
	I0514 00:21:56.249225    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:21:58.121332    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:21:58.121332    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:21:58.122089    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:22:00.351479    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:22:00.352302    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:00.355130    4316 out.go:177] * Found network options:
	I0514 00:22:00.357987    4316 out.go:177]   - NO_PROXY=172.23.102.122,172.23.97.128
	W0514 00:22:00.360358    4316 proxy.go:119] fail to check proxy env: Error ip not in block
	W0514 00:22:00.360358    4316 proxy.go:119] fail to check proxy env: Error ip not in block
	I0514 00:22:00.362628    4316 out.go:177]   - NO_PROXY=172.23.102.122,172.23.97.128
	W0514 00:22:00.364886    4316 proxy.go:119] fail to check proxy env: Error ip not in block
	W0514 00:22:00.364886    4316 proxy.go:119] fail to check proxy env: Error ip not in block
	W0514 00:22:00.366343    4316 proxy.go:119] fail to check proxy env: Error ip not in block
	W0514 00:22:00.366343    4316 proxy.go:119] fail to check proxy env: Error ip not in block
	I0514 00:22:00.367654    4316 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0514 00:22:00.367654    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:22:00.375693    4316 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0514 00:22:00.375693    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:22:02.356924    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:22:02.357124    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:02.357218    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:22:02.362219    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:22:02.362219    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:02.362756    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:22:04.713971    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:22:04.713971    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:04.714432    4316 sshutil.go:53] new ssh client: &{IP:172.23.111.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m03\id_rsa Username:docker}
	I0514 00:22:04.735525    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:22:04.735934    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:04.736319    4316 sshutil.go:53] new ssh client: &{IP:172.23.111.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m03\id_rsa Username:docker}
	I0514 00:22:04.809488    4316 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0514 00:22:04.810145    4316 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.4340992s)
	W0514 00:22:04.810145    4316 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0514 00:22:04.818898    4316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0514 00:22:04.886656    4316 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0514 00:22:04.886826    4316 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5188182s)
	I0514 00:22:04.886842    4316 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0514 00:22:04.886953    4316 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0514 00:22:04.886953    4316 start.go:494] detecting cgroup driver to use...
	I0514 00:22:04.887296    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 00:22:04.921794    4316 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0514 00:22:04.931734    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0514 00:22:04.963270    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0514 00:22:04.986530    4316 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0514 00:22:04.999807    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0514 00:22:05.029380    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 00:22:05.058352    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0514 00:22:05.083622    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 00:22:05.112998    4316 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0514 00:22:05.142933    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0514 00:22:05.171495    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0514 00:22:05.198510    4316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0514 00:22:05.224684    4316 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0514 00:22:05.241590    4316 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0514 00:22:05.251440    4316 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0514 00:22:05.277900    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:22:05.461282    4316 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0514 00:22:05.490623    4316 start.go:494] detecting cgroup driver to use...
	I0514 00:22:05.500207    4316 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0514 00:22:05.523447    4316 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0514 00:22:05.523447    4316 command_runner.go:130] > [Unit]
	I0514 00:22:05.523447    4316 command_runner.go:130] > Description=Docker Application Container Engine
	I0514 00:22:05.523447    4316 command_runner.go:130] > Documentation=https://docs.docker.com
	I0514 00:22:05.523447    4316 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0514 00:22:05.523447    4316 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0514 00:22:05.523447    4316 command_runner.go:130] > StartLimitBurst=3
	I0514 00:22:05.523447    4316 command_runner.go:130] > StartLimitIntervalSec=60
	I0514 00:22:05.523447    4316 command_runner.go:130] > [Service]
	I0514 00:22:05.523447    4316 command_runner.go:130] > Type=notify
	I0514 00:22:05.523447    4316 command_runner.go:130] > Restart=on-failure
	I0514 00:22:05.523447    4316 command_runner.go:130] > Environment=NO_PROXY=172.23.102.122
	I0514 00:22:05.523447    4316 command_runner.go:130] > Environment=NO_PROXY=172.23.102.122,172.23.97.128
	I0514 00:22:05.523447    4316 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0514 00:22:05.523447    4316 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0514 00:22:05.523447    4316 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0514 00:22:05.523447    4316 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0514 00:22:05.523447    4316 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0514 00:22:05.523447    4316 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0514 00:22:05.523447    4316 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0514 00:22:05.523447    4316 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0514 00:22:05.523447    4316 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0514 00:22:05.523447    4316 command_runner.go:130] > ExecStart=
	I0514 00:22:05.523447    4316 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0514 00:22:05.524447    4316 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0514 00:22:05.524447    4316 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0514 00:22:05.524447    4316 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0514 00:22:05.524447    4316 command_runner.go:130] > LimitNOFILE=infinity
	I0514 00:22:05.524447    4316 command_runner.go:130] > LimitNPROC=infinity
	I0514 00:22:05.524447    4316 command_runner.go:130] > LimitCORE=infinity
	I0514 00:22:05.524447    4316 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0514 00:22:05.524447    4316 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0514 00:22:05.524447    4316 command_runner.go:130] > TasksMax=infinity
	I0514 00:22:05.524447    4316 command_runner.go:130] > TimeoutStartSec=0
	I0514 00:22:05.524447    4316 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0514 00:22:05.524447    4316 command_runner.go:130] > Delegate=yes
	I0514 00:22:05.524447    4316 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0514 00:22:05.524447    4316 command_runner.go:130] > KillMode=process
	I0514 00:22:05.524447    4316 command_runner.go:130] > [Install]
	I0514 00:22:05.524447    4316 command_runner.go:130] > WantedBy=multi-user.target
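The back-to-back `ExecStart=` / `ExecStart=/usr/bin/dockerd ...` pair in the unit dump above is the standard systemd drop-in idiom the file's own comments describe: an empty `ExecStart=` first clears the command inherited from the base unit, because a non-oneshot service with more than one accumulated `ExecStart` is rejected. A minimal sketch of writing such a drop-in (file name and dockerd flags are illustrative, and the file is written to a scratch directory rather than `/etc/systemd/system`):

```shell
# Write a drop-in that overrides ExecStart from the base unit.
dropin_dir=$(mktemp -d)
cat > "$dropin_dir/10-machine-flags.conf" <<'EOF'
[Service]
# First directive clears the inherited command; second sets the new one.
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF

# After installing a drop-in for real, systemd must re-read unit files:
#   sudo systemctl daemon-reload && sudo systemctl restart docker
grep -c '^ExecStart=' "$dropin_dir/10-machine-flags.conf"
```

The count is 2: one clearing directive, one replacement, which is exactly the shape seen in the `docker.service` drop-in the log prints.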
	I0514 00:22:05.533931    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 00:22:05.567981    4316 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0514 00:22:05.603770    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 00:22:05.637643    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0514 00:22:05.669362    4316 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0514 00:22:05.728890    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0514 00:22:05.756769    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 00:22:05.798538    4316 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0514 00:22:05.807501    4316 ssh_runner.go:195] Run: which cri-dockerd
	I0514 00:22:05.813646    4316 command_runner.go:130] > /usr/bin/cri-dockerd
	I0514 00:22:05.821747    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0514 00:22:05.838769    4316 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0514 00:22:05.879429    4316 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0514 00:22:06.061305    4316 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0514 00:22:06.245852    4316 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0514 00:22:06.245965    4316 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0514 00:22:06.287299    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:22:06.473998    4316 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0514 00:22:09.055475    4316 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5813119s)
	I0514 00:22:09.066661    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0514 00:22:09.097427    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 00:22:09.129009    4316 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0514 00:22:09.311080    4316 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0514 00:22:09.498124    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:22:09.671539    4316 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0514 00:22:09.706431    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 00:22:09.736219    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:22:09.922310    4316 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0514 00:22:10.020923    4316 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0514 00:22:10.030714    4316 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0514 00:22:10.038675    4316 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0514 00:22:10.038675    4316 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0514 00:22:10.038675    4316 command_runner.go:130] > Device: 0,22	Inode: 850         Links: 1
	I0514 00:22:10.038675    4316 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0514 00:22:10.038675    4316 command_runner.go:130] > Access: 2024-05-14 00:22:10.177592384 +0000
	I0514 00:22:10.038675    4316 command_runner.go:130] > Modify: 2024-05-14 00:22:10.177592384 +0000
	I0514 00:22:10.038675    4316 command_runner.go:130] > Change: 2024-05-14 00:22:10.181592534 +0000
	I0514 00:22:10.038675    4316 command_runner.go:130] >  Birth: -
	I0514 00:22:10.038675    4316 start.go:562] Will wait 60s for crictl version
	I0514 00:22:10.045705    4316 ssh_runner.go:195] Run: which crictl
	I0514 00:22:10.052082    4316 command_runner.go:130] > /usr/bin/crictl
	I0514 00:22:10.061346    4316 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0514 00:22:10.113065    4316 command_runner.go:130] > Version:  0.1.0
	I0514 00:22:10.113065    4316 command_runner.go:130] > RuntimeName:  docker
	I0514 00:22:10.113156    4316 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0514 00:22:10.113156    4316 command_runner.go:130] > RuntimeApiVersion:  v1
	I0514 00:22:10.113214    4316 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0514 00:22:10.122534    4316 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 00:22:10.161688    4316 command_runner.go:130] > 26.0.2
	I0514 00:22:10.167681    4316 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 00:22:10.196989    4316 command_runner.go:130] > 26.0.2
	I0514 00:22:10.199755    4316 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0514 00:22:10.203718    4316 out.go:177]   - env NO_PROXY=172.23.102.122
	I0514 00:22:10.205749    4316 out.go:177]   - env NO_PROXY=172.23.102.122,172.23.97.128
	I0514 00:22:10.207617    4316 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0514 00:22:10.211419    4316 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0514 00:22:10.211419    4316 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0514 00:22:10.211419    4316 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0514 00:22:10.211419    4316 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:95:ed Flags:up|broadcast|multicast|running}
	I0514 00:22:10.214411    4316 ip.go:210] interface addr: fe80::3ceb:68d:afab:af25/64
	I0514 00:22:10.214411    4316 ip.go:210] interface addr: 172.23.96.1/20
	I0514 00:22:10.223817    4316 ssh_runner.go:195] Run: grep 172.23.96.1	host.minikube.internal$ /etc/hosts
	I0514 00:22:10.229778    4316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
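The `/etc/hosts` command above uses a filter-then-append idiom: strip any existing line for `host.minikube.internal`, append the fresh mapping, and copy the rebuilt file back in one step so a re-run never accumulates duplicates. A sketch of the same pattern against a scratch file (the stale `172.23.96.9` entry is an assumed example; the new IP matches the log):

```shell
# Scratch hosts file with a stale entry for the name being refreshed.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.23.96.9\thost.minikube.internal\n' > "$hosts"

ip=172.23.96.1
# Drop any line ending in "<tab>host.minikube.internal", append the new
# mapping, then copy the result over the original (the log uses sudo cp
# because /etc/hosts is root-owned).
{ grep -v $'\thost.minikube.internal$' "$hosts"; printf '%s\thost.minikube.internal\n' "$ip"; } > "$hosts.new"
cp "$hosts.new" "$hosts"

grep host.minikube.internal "$hosts"
```

Building the result in a temp file and copying it back also avoids truncating the live file mid-rewrite, which matters for something every name lookup reads.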
	I0514 00:22:10.249542    4316 mustload.go:65] Loading cluster: multinode-101100
	I0514 00:22:10.249992    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:22:10.250906    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:22:12.127984    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:22:12.128682    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:12.128682    4316 host.go:66] Checking if "multinode-101100" exists ...
	I0514 00:22:12.129430    4316 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-101100 for IP: 172.23.111.37
	I0514 00:22:12.129430    4316 certs.go:194] generating shared ca certs ...
	I0514 00:22:12.129430    4316 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 00:22:12.129952    4316 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0514 00:22:12.130258    4316 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0514 00:22:12.130346    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0514 00:22:12.130537    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0514 00:22:12.130697    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0514 00:22:12.130723    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0514 00:22:12.131165    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem (1338 bytes)
	W0514 00:22:12.131440    4316 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984_empty.pem, impossibly tiny 0 bytes
	I0514 00:22:12.131513    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0514 00:22:12.131741    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0514 00:22:12.131893    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0514 00:22:12.132122    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0514 00:22:12.132470    4316 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem (1708 bytes)
	I0514 00:22:12.132586    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:22:12.132745    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem -> /usr/share/ca-certificates/5984.pem
	I0514 00:22:12.132822    4316 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> /usr/share/ca-certificates/59842.pem
	I0514 00:22:12.133041    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0514 00:22:12.184529    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0514 00:22:12.244756    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0514 00:22:12.297173    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0514 00:22:12.345941    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0514 00:22:12.391896    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem --> /usr/share/ca-certificates/5984.pem (1338 bytes)
	I0514 00:22:12.434600    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /usr/share/ca-certificates/59842.pem (1708 bytes)
	I0514 00:22:12.492171    4316 ssh_runner.go:195] Run: openssl version
	I0514 00:22:12.501302    4316 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0514 00:22:12.511793    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59842.pem && ln -fs /usr/share/ca-certificates/59842.pem /etc/ssl/certs/59842.pem"
	I0514 00:22:12.536786    4316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59842.pem
	I0514 00:22:12.543890    4316 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0514 00:22:12.543972    4316 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0514 00:22:12.553635    4316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59842.pem
	I0514 00:22:12.561375    4316 command_runner.go:130] > 3ec20f2e
	I0514 00:22:12.569818    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59842.pem /etc/ssl/certs/3ec20f2e.0"
	I0514 00:22:12.597665    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0514 00:22:12.622930    4316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:22:12.629962    4316 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:22:12.630044    4316 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:22:12.642059    4316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0514 00:22:12.650748    4316 command_runner.go:130] > b5213941
	I0514 00:22:12.661540    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0514 00:22:12.690067    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5984.pem && ln -fs /usr/share/ca-certificates/5984.pem /etc/ssl/certs/5984.pem"
	I0514 00:22:12.716662    4316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5984.pem
	I0514 00:22:12.724120    4316 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0514 00:22:12.724288    4316 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0514 00:22:12.733760    4316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5984.pem
	I0514 00:22:12.741790    4316 command_runner.go:130] > 51391683
	I0514 00:22:12.750627    4316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5984.pem /etc/ssl/certs/51391683.0"
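The hashing steps above exist because OpenSSL locates CA certificates in `/etc/ssl/certs` by subject-hash filename, `<hash>.0`, so each installed PEM gets a symlink named after the output of `openssl x509 -hash -noout -in <cert>`. The `test -L ... || ln -fs ...` form makes the step idempotent. A sketch of just the symlink half, using a scratch directory and the `3ec20f2e` hash the log computed (the cert body here is a placeholder, not a real certificate):

```shell
# Scratch stand-in for /etc/ssl/certs plus one "certificate".
certs=$(mktemp -d)
echo "fake cert body" > "$certs/59842.pem"

# In the log this value came from: openssl x509 -hash -noout -in 59842.pem
hash=3ec20f2e

# Create the hash-named symlink only if it does not already exist;
# running it twice is a no-op.
test -L "$certs/$hash.0" || ln -fs "$certs/59842.pem" "$certs/$hash.0"
test -L "$certs/$hash.0" || ln -fs "$certs/59842.pem" "$certs/$hash.0"

readlink "$certs/$hash.0"
```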
	I0514 00:22:12.776716    4316 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0514 00:22:12.783486    4316 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0514 00:22:12.783486    4316 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0514 00:22:12.783486    4316 kubeadm.go:928] updating node {m03 172.23.111.37 0 v1.30.0  false true} ...
	I0514 00:22:12.784100    4316 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-101100-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.111.37
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-101100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0514 00:22:12.792376    4316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0514 00:22:12.809150    4316 command_runner.go:130] > kubeadm
	I0514 00:22:12.809150    4316 command_runner.go:130] > kubectl
	I0514 00:22:12.809150    4316 command_runner.go:130] > kubelet
	I0514 00:22:12.809150    4316 binaries.go:44] Found k8s binaries, skipping transfer
	I0514 00:22:12.818264    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0514 00:22:12.837354    4316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0514 00:22:12.869525    4316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0514 00:22:12.907971    4316 ssh_runner.go:195] Run: grep 172.23.102.122	control-plane.minikube.internal$ /etc/hosts
	I0514 00:22:12.914521    4316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.102.122	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0514 00:22:12.941381    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:22:13.133158    4316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0514 00:22:13.161650    4316 host.go:66] Checking if "multinode-101100" exists ...
	I0514 00:22:13.162414    4316 start.go:316] joinCluster: &{Name:multinode-101100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-101100 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.102.122 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.97.128 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.111.37 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:
false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0514 00:22:13.162414    4316 start.go:329] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:172.23.111.37 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0514 00:22:13.162414    4316 host.go:66] Checking if "multinode-101100-m03" exists ...
	I0514 00:22:13.163191    4316 mustload.go:65] Loading cluster: multinode-101100
	I0514 00:22:13.163628    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:22:13.164073    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:22:15.084048    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:22:15.084048    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:15.085030    4316 host.go:66] Checking if "multinode-101100" exists ...
	I0514 00:22:15.085491    4316 api_server.go:166] Checking apiserver status ...
	I0514 00:22:15.093395    4316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 00:22:15.093395    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:22:17.036257    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:22:17.036447    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:17.036527    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:22:19.326564    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:22:19.327171    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:19.327171    4316 sshutil.go:53] new ssh client: &{IP:172.23.102.122 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\id_rsa Username:docker}
	I0514 00:22:19.429259    4316 command_runner.go:130] > 1838
	I0514 00:22:19.429396    4316 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.3355874s)
	I0514 00:22:19.437807    4316 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1838/cgroup
	W0514 00:22:19.458736    4316 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1838/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0514 00:22:19.467719    4316 ssh_runner.go:195] Run: ls
	I0514 00:22:19.474727    4316 api_server.go:253] Checking apiserver healthz at https://172.23.102.122:8443/healthz ...
	I0514 00:22:19.481611    4316 api_server.go:279] https://172.23.102.122:8443/healthz returned 200:
	ok
	I0514 00:22:19.490951    4316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl drain multinode-101100-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0514 00:22:19.642298    4316 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-tfbt8, kube-system/kube-proxy-8zsgn
	I0514 00:22:19.643826    4316 command_runner.go:130] > node/multinode-101100-m03 cordoned
	I0514 00:22:19.644549    4316 command_runner.go:130] > node/multinode-101100-m03 drained
	I0514 00:22:19.644717    4316 node.go:128] successfully drained node "multinode-101100-m03"
	I0514 00:22:19.644717    4316 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0514 00:22:19.644848    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:22:21.533290    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:22:21.533369    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:21.533369    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m03 ).networkadapters[0]).ipaddresses[0]
	I0514 00:22:23.781215    4316 main.go:141] libmachine: [stdout =====>] : 172.23.111.37
	
	I0514 00:22:23.781215    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:23.782328    4316 sshutil.go:53] new ssh client: &{IP:172.23.111.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m03\id_rsa Username:docker}
	I0514 00:22:24.169698    4316 command_runner.go:130] ! W0514 00:22:24.402117    1486 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0514 00:22:24.530628    4316 command_runner.go:130] > [preflight] Running pre-flight checks
	I0514 00:22:24.530679    4316 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0514 00:22:24.530679    4316 command_runner.go:130] > [reset] Stopping the kubelet service
	I0514 00:22:24.530719    4316 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0514 00:22:24.530719    4316 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0514 00:22:24.530751    4316 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0514 00:22:24.530751    4316 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0514 00:22:24.530751    4316 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0514 00:22:24.530801    4316 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0514 00:22:24.530801    4316 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0514 00:22:24.530801    4316 command_runner.go:130] > to reset your system's IPVS tables.
	I0514 00:22:24.530801    4316 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0514 00:22:24.530801    4316 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0514 00:22:24.530801    4316 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (4.8856942s)
	I0514 00:22:24.530995    4316 node.go:155] successfully reset node "multinode-101100-m03"
	I0514 00:22:24.531797    4316 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0514 00:22:24.532444    4316 kapi.go:59] client config for multinode-101100: &rest.Config{Host:"https://172.23.102.122:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0514 00:22:24.533198    4316 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0514 00:22:24.533263    4316 round_trippers.go:463] DELETE https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:22:24.533263    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:24.533263    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:24.533326    4316 round_trippers.go:473]     Content-Type: application/json
	I0514 00:22:24.533326    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:24.550241    4316 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0514 00:22:24.550241    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:24.550241    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:24.550241    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:24.550241    4316 round_trippers.go:580]     Content-Length: 171
	I0514 00:22:24.550241    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:24 GMT
	I0514 00:22:24.550241    4316 round_trippers.go:580]     Audit-Id: a88d2b44-64bb-4987-a7d0-c03092b9e2e3
	I0514 00:22:24.550241    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:24.550241    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:24.550241    4316 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-101100-m03","kind":"nodes","uid":"fd2d4a0b-dc97-4959-b2ba-0f51719ad2b3"}}
	I0514 00:22:24.550840    4316 node.go:180] successfully deleted node "multinode-101100-m03"
	I0514 00:22:24.550840    4316 start.go:333] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:172.23.111.37 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0514 00:22:24.550930    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0514 00:22:24.551007    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:22:26.445965    4316 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:22:26.445965    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:26.446918    4316 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:22:28.699598    4316 main.go:141] libmachine: [stdout =====>] : 172.23.102.122
	
	I0514 00:22:28.699598    4316 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:22:28.699598    4316 sshutil.go:53] new ssh client: &{IP:172.23.102.122 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\id_rsa Username:docker}
	I0514 00:22:28.886585    4316 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token j355gq.bh21j9sltd7tgxsw --discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 
	I0514 00:22:28.886585    4316 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.3353782s)
	I0514 00:22:28.887584    4316 start.go:342] trying to join worker node "m03" to cluster: &{Name:m03 IP:172.23.111.37 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0514 00:22:28.887584    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token j355gq.bh21j9sltd7tgxsw --discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-101100-m03"
	I0514 00:22:29.086610    4316 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0514 00:22:30.422942    4316 command_runner.go:130] > [preflight] Running pre-flight checks
	I0514 00:22:30.423024    4316 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0514 00:22:30.423024    4316 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0514 00:22:30.423024    4316 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0514 00:22:30.423024    4316 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0514 00:22:30.423024    4316 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0514 00:22:30.423138    4316 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0514 00:22:30.423138    4316 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002661537s
	I0514 00:22:30.423138    4316 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0514 00:22:30.423138    4316 command_runner.go:130] > This node has joined the cluster:
	I0514 00:22:30.423211    4316 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0514 00:22:30.423211    4316 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0514 00:22:30.423273    4316 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0514 00:22:30.423273    4316 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token j355gq.bh21j9sltd7tgxsw --discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-101100-m03": (1.5355913s)
	I0514 00:22:30.423360    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0514 00:22:30.625570    4316 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0514 00:22:30.829669    4316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-101100-m03 minikube.k8s.io/updated_at=2024_05_14T00_22_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761 minikube.k8s.io/name=multinode-101100 minikube.k8s.io/primary=false
	I0514 00:22:30.962568    4316 command_runner.go:130] > node/multinode-101100-m03 labeled
	I0514 00:22:30.962696    4316 start.go:318] duration metric: took 17.7991448s to joinCluster
	I0514 00:22:30.963023    4316 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.23.111.37 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0514 00:22:30.966858    4316 out.go:177] * Verifying Kubernetes components...
	I0514 00:22:30.963921    4316 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:22:30.977741    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 00:22:31.178666    4316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0514 00:22:31.205179    4316 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0514 00:22:31.206161    4316 kapi.go:59] client config for multinode-101100: &rest.Config{Host:"https://172.23.102.122:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-101100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0514 00:22:31.207253    4316 node_ready.go:35] waiting up to 6m0s for node "multinode-101100-m03" to be "Ready" ...
	I0514 00:22:31.207253    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:22:31.207253    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:31.207253    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:31.207253    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:31.213710    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:22:31.213710    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:31.214680    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:31.214680    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:31.214680    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:31 GMT
	I0514 00:22:31.214680    4316 round_trippers.go:580]     Audit-Id: 5fc9ab20-804d-4d36-8ac1-22507b3fd9e3
	I0514 00:22:31.214680    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:31.214680    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:31.214680    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"950aa8d1-19df-4c88-9945-14378ec5f191","resourceVersion":"2181","creationTimestamp":"2024-05-14T00:22:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_22_30_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:22:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3396 chars]
	I0514 00:22:31.722426    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:22:31.722426    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:31.722481    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:31.722481    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:31.724955    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:22:31.724955    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:31.724955    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:31.725671    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:31.725671    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:31 GMT
	I0514 00:22:31.725671    4316 round_trippers.go:580]     Audit-Id: eafaf302-743a-4936-b61e-b6eb0ae95a14
	I0514 00:22:31.725671    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:31.725671    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:31.726110    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"950aa8d1-19df-4c88-9945-14378ec5f191","resourceVersion":"2181","creationTimestamp":"2024-05-14T00:22:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_22_30_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:22:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3396 chars]
	I0514 00:22:32.210573    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:22:32.210638    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:32.210638    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:32.210638    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:32.216109    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:22:32.216109    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:32.216109    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:32.216109    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:32.216109    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:32.216109    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:32 GMT
	I0514 00:22:32.216109    4316 round_trippers.go:580]     Audit-Id: 88352a26-6350-4fb6-904a-cd30eeb911b9
	I0514 00:22:32.216109    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:32.216827    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"950aa8d1-19df-4c88-9945-14378ec5f191","resourceVersion":"2181","creationTimestamp":"2024-05-14T00:22:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_22_30_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:22:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3396 chars]
	I0514 00:22:32.713324    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:22:32.713324    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:32.713324    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:32.713324    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:32.720032    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:22:32.720032    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:32.720032    4316 round_trippers.go:580]     Audit-Id: 559f72a3-3e52-4bac-9e0f-ec11ed30a4f2
	I0514 00:22:32.720032    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:32.720032    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:32.720032    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:32.720032    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:32.720032    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:32 GMT
	I0514 00:22:32.720735    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"950aa8d1-19df-4c88-9945-14378ec5f191","resourceVersion":"2181","creationTimestamp":"2024-05-14T00:22:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_22_30_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:22:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3396 chars]
	I0514 00:22:33.218838    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:22:33.218930    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:33.218952    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:33.218952    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:33.221523    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:22:33.221523    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:33.221523    4316 round_trippers.go:580]     Audit-Id: 7d54da2c-5ce5-4046-a307-f3e8aaec8f56
	I0514 00:22:33.221523    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:33.221523    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:33.221523    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:33.221523    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:33.221523    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:33 GMT
	I0514 00:22:33.221523    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"950aa8d1-19df-4c88-9945-14378ec5f191","resourceVersion":"2190","creationTimestamp":"2024-05-14T00:22:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_22_30_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:22:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3505 chars]
	I0514 00:22:33.221523    4316 node_ready.go:53] node "multinode-101100-m03" has status "Ready":"False"
	I0514 00:22:33.723609    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:22:33.723717    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:33.723796    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:33.723796    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:33.727553    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:22:33.727668    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:33.727723    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:33 GMT
	I0514 00:22:33.727723    4316 round_trippers.go:580]     Audit-Id: 1cfb0054-dddf-43df-8341-d8c807f9aa61
	I0514 00:22:33.727723    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:33.727723    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:33.727762    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:33.727762    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:33.727889    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"950aa8d1-19df-4c88-9945-14378ec5f191","resourceVersion":"2190","creationTimestamp":"2024-05-14T00:22:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_22_30_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:22:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3505 chars]
	I0514 00:22:34.208454    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:22:34.208454    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:34.208454    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:34.208454    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:34.214189    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:22:34.214189    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:34.214189    4316 round_trippers.go:580]     Audit-Id: 45699a1f-eb0b-40d6-ba20-a075773242c7
	I0514 00:22:34.214189    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:34.214189    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:34.214189    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:34.214189    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:34.214189    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:34 GMT
	I0514 00:22:34.214717    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"950aa8d1-19df-4c88-9945-14378ec5f191","resourceVersion":"2190","creationTimestamp":"2024-05-14T00:22:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_22_30_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:22:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3505 chars]
	I0514 00:22:34.708540    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:22:34.708763    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:34.708763    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:34.708763    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:34.712480    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:22:34.712480    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:34.712480    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:34.712480    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:34 GMT
	I0514 00:22:34.712480    4316 round_trippers.go:580]     Audit-Id: 67bedaf4-a410-48e5-86f6-c8d0307f2a0e
	I0514 00:22:34.712480    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:34.712480    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:34.712480    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:34.712480    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"950aa8d1-19df-4c88-9945-14378ec5f191","resourceVersion":"2190","creationTimestamp":"2024-05-14T00:22:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_22_30_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:22:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3505 chars]
	I0514 00:22:35.222577    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:22:35.222577    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.222684    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.222684    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.225748    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:22:35.225748    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.225748    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.226096    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.226096    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:35 GMT
	I0514 00:22:35.226096    4316 round_trippers.go:580]     Audit-Id: d38347b4-a927-45e5-ba00-b0a03178f484
	I0514 00:22:35.226096    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.226096    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.226223    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"950aa8d1-19df-4c88-9945-14378ec5f191","resourceVersion":"2204","creationTimestamp":"2024-05-14T00:22:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_22_30_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:22:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3763 chars]
	I0514 00:22:35.226533    4316 node_ready.go:49] node "multinode-101100-m03" has status "Ready":"True"
	I0514 00:22:35.226654    4316 node_ready.go:38] duration metric: took 4.0191445s for node "multinode-101100-m03" to be "Ready" ...
	I0514 00:22:35.226654    4316 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 00:22:35.226777    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods
	I0514 00:22:35.226777    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.226777    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.226777    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.231623    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:22:35.231623    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.231623    4316 round_trippers.go:580]     Audit-Id: d3610563-9ebf-47da-acc9-11fb4e5a3dd4
	I0514 00:22:35.231623    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.231694    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.231694    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.231694    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.231694    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:35 GMT
	I0514 00:22:35.233208    4316 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2204"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1851","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 85700 chars]
	I0514 00:22:35.236507    4316 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:35.236507    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4kmx4
	I0514 00:22:35.236507    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.236507    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.236507    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.239097    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:22:35.240094    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.240115    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.240115    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:35 GMT
	I0514 00:22:35.240115    4316 round_trippers.go:580]     Audit-Id: 0674e198-bf3c-4b75-aa06-6aa2baa1467b
	I0514 00:22:35.240115    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.240115    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.240115    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.240175    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-4kmx4","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"06858a47-f51b-48d8-a2a6-f60b8107be13","resourceVersion":"1851","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"960ba3f3-a236-42ed-9323-b1a388cfac2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"960ba3f3-a236-42ed-9323-b1a388cfac2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0514 00:22:35.240175    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:22:35.240175    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.240175    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.240175    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.243286    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:22:35.243286    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.243286    4316 round_trippers.go:580]     Audit-Id: 085d45b6-d3c1-45cd-a1c1-f640176b3b92
	I0514 00:22:35.243286    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.243286    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.243286    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.243286    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.243286    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:35 GMT
	I0514 00:22:35.244257    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:22:35.244257    4316 pod_ready.go:92] pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace has status "Ready":"True"
	I0514 00:22:35.244257    4316 pod_ready.go:81] duration metric: took 7.7488ms for pod "coredns-7db6d8ff4d-4kmx4" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:35.244257    4316 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:35.244257    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-101100
	I0514 00:22:35.244257    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.244257    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.244257    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.247142    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:22:35.247142    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.247142    4316 round_trippers.go:580]     Audit-Id: e4f6db5d-1943-416a-b87a-c378d4270193
	I0514 00:22:35.247142    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.247334    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.247334    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.247334    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.247334    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:35 GMT
	I0514 00:22:35.247493    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-101100","namespace":"kube-system","uid":"74cd34fe-a56b-453d-afb3-a9db3db0d5ba","resourceVersion":"1779","creationTimestamp":"2024-05-14T00:16:55Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.102.122:2379","kubernetes.io/config.hash":"62d8afc7714e8ab65bff9675d120bb67","kubernetes.io/config.mirror":"62d8afc7714e8ab65bff9675d120bb67","kubernetes.io/config.seen":"2024-05-14T00:16:49.843121737Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:16:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0514 00:22:35.247942    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:22:35.248003    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.248003    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.248003    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.251118    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:22:35.251118    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.251118    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.251118    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.251118    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.251230    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.251230    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:35 GMT
	I0514 00:22:35.251230    4316 round_trippers.go:580]     Audit-Id: 90d8dde8-8ccd-4894-a935-03e55fb5d5c0
	I0514 00:22:35.252674    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:22:35.253138    4316 pod_ready.go:92] pod "etcd-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0514 00:22:35.253171    4316 pod_ready.go:81] duration metric: took 8.8806ms for pod "etcd-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:35.253171    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:35.253278    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-101100
	I0514 00:22:35.253311    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.253311    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.253311    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.257363    4316 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0514 00:22:35.257363    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.257363    4316 round_trippers.go:580]     Audit-Id: 86901156-fbfa-45ec-bee4-58bd5f849dd7
	I0514 00:22:35.257363    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.257363    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.257363    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.257363    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.257363    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:35 GMT
	I0514 00:22:35.257363    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-101100","namespace":"kube-system","uid":"60889645-4c2d-4cfc-b322-c0f1b6e34503","resourceVersion":"1775","creationTimestamp":"2024-05-14T00:16:55Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.102.122:8443","kubernetes.io/config.hash":"378d61cf78af695f1df41e321907a84d","kubernetes.io/config.mirror":"378d61cf78af695f1df41e321907a84d","kubernetes.io/config.seen":"2024-05-14T00:16:49.778409853Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:16:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0514 00:22:35.259276    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:22:35.259276    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.259276    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.259276    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.261298    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:22:35.261298    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.261298    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.261298    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.261298    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:35 GMT
	I0514 00:22:35.261298    4316 round_trippers.go:580]     Audit-Id: 88f7d8b2-32c3-472f-a6d0-56c97edff491
	I0514 00:22:35.261298    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.261298    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.261298    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:22:35.262290    4316 pod_ready.go:92] pod "kube-apiserver-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0514 00:22:35.262290    4316 pod_ready.go:81] duration metric: took 9.1183ms for pod "kube-apiserver-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:35.262290    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:35.262290    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-101100
	I0514 00:22:35.262290    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.262290    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.262290    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.265590    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:22:35.265590    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.265590    4316 round_trippers.go:580]     Audit-Id: 9d8976cb-6f02-4632-9976-dab069dbc7d6
	I0514 00:22:35.265590    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.265590    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.265590    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.265590    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.265590    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:35 GMT
	I0514 00:22:35.265590    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-101100","namespace":"kube-system","uid":"1a74381a-7477-4fd3-b344-c4a230014f97","resourceVersion":"1752","creationTimestamp":"2024-05-13T23:56:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5393de2704b2efef461d22fa52aa93c8","kubernetes.io/config.mirror":"5393de2704b2efef461d22fa52aa93c8","kubernetes.io/config.seen":"2024-05-13T23:56:09.392106640Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0514 00:22:35.266396    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:22:35.266396    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.266396    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.266396    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.268170    4316 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0514 00:22:35.268170    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.268170    4316 round_trippers.go:580]     Audit-Id: 1bee75b3-a93d-4f96-b61c-47facc6def52
	I0514 00:22:35.268170    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.268170    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.268170    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.268170    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.268170    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:35 GMT
	I0514 00:22:35.268170    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:22:35.268170    4316 pod_ready.go:92] pod "kube-controller-manager-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0514 00:22:35.268170    4316 pod_ready.go:81] duration metric: took 5.8799ms for pod "kube-controller-manager-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:35.268170    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8zsgn" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:35.428006    4316 request.go:629] Waited for 158.6721ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8zsgn
	I0514 00:22:35.428385    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8zsgn
	I0514 00:22:35.428418    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.428461    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.428461    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.432383    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:22:35.432383    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.432383    4316 round_trippers.go:580]     Audit-Id: 72020528-bfeb-44ba-8bb6-c52684e32a80
	I0514 00:22:35.432383    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.432383    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.432383    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.432454    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.432454    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:35 GMT
	I0514 00:22:35.433049    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8zsgn","generateName":"kube-proxy-","namespace":"kube-system","uid":"af208cbd-fa8a-4822-9b19-dc30f63fa59c","resourceVersion":"2194","creationTimestamp":"2024-05-14T00:03:17Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:03:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5837 chars]
	I0514 00:22:35.629181    4316 request.go:629] Waited for 195.0781ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:22:35.629499    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m03
	I0514 00:22:35.629499    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.629499    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.629499    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.632999    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:22:35.632999    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.632999    4316 round_trippers.go:580]     Audit-Id: 6eaa92e9-46ec-48a7-827b-273470d0a01c
	I0514 00:22:35.632999    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.632999    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.632999    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.632999    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.632999    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:35 GMT
	I0514 00:22:35.632999    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m03","uid":"950aa8d1-19df-4c88-9945-14378ec5f191","resourceVersion":"2204","creationTimestamp":"2024-05-14T00:22:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_22_30_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:22:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3763 chars]
	I0514 00:22:35.633644    4316 pod_ready.go:92] pod "kube-proxy-8zsgn" in "kube-system" namespace has status "Ready":"True"
	I0514 00:22:35.633644    4316 pod_ready.go:81] duration metric: took 365.4504ms for pod "kube-proxy-8zsgn" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:35.633644    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b25hq" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:35.832685    4316 request.go:629] Waited for 198.9615ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b25hq
	I0514 00:22:35.832685    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b25hq
	I0514 00:22:35.832685    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:35.832685    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:35.832685    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:35.835474    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:22:35.835474    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:35.836471    4316 round_trippers.go:580]     Audit-Id: d2783015-c132-4022-b205-8cb8470c898b
	I0514 00:22:35.836471    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:35.836471    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:35.836471    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:35.836471    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:35.836471    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:36 GMT
	I0514 00:22:35.836522    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-b25hq","generateName":"kube-proxy-","namespace":"kube-system","uid":"d39f5818-3e88-4162-a7ce-734ca28103bf","resourceVersion":"2012","creationTimestamp":"2024-05-13T23:59:02Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:59:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5837 chars]
	I0514 00:22:36.034928    4316 request.go:629] Waited for 197.5426ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:22:36.035422    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100-m02
	I0514 00:22:36.035422    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:36.035422    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:36.035422    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:36.041311    4316 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0514 00:22:36.041311    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:36.041311    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:36.041311    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:36 GMT
	I0514 00:22:36.041311    4316 round_trippers.go:580]     Audit-Id: 54799025-00b5-43de-8af8-02c05f6b1665
	I0514 00:22:36.041311    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:36.041311    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:36.041311    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:36.042063    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100-m02","uid":"295b8cab-ff01-4711-af9c-e17d6a2613d8","resourceVersion":"2032","creationTimestamp":"2024-05-14T00:20:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_14T00_20_20_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-14T00:20:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3812 chars]
	I0514 00:22:36.042620    4316 pod_ready.go:92] pod "kube-proxy-b25hq" in "kube-system" namespace has status "Ready":"True"
	I0514 00:22:36.042728    4316 pod_ready.go:81] duration metric: took 409.058ms for pod "kube-proxy-b25hq" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:36.042769    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zhcz6" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:36.222713    4316 request.go:629] Waited for 179.8274ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zhcz6
	I0514 00:22:36.226296    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zhcz6
	I0514 00:22:36.226296    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:36.226296    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:36.226296    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:36.232730    4316 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0514 00:22:36.232730    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:36.232730    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:36.232730    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:36.232730    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:36.232730    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:36 GMT
	I0514 00:22:36.232730    4316 round_trippers.go:580]     Audit-Id: 6d25e29d-d450-417d-84fb-2e2822e042d8
	I0514 00:22:36.232730    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:36.232907    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zhcz6","generateName":"kube-proxy-","namespace":"kube-system","uid":"a9a488af-41ba-47f3-87b0-5a2f062afad6","resourceVersion":"1732","creationTimestamp":"2024-05-13T23:56:23Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"28ea9bf5-a30e-426c-b781-eb7c4cc41005","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28ea9bf5-a30e-426c-b781-eb7c4cc41005\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0514 00:22:36.425650    4316 request.go:629] Waited for 191.7272ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:22:36.425650    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:22:36.425650    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:36.425650    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:36.425650    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:36.429348    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:22:36.429348    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:36.429668    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:36.429668    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:36.429668    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:36 GMT
	I0514 00:22:36.429668    4316 round_trippers.go:580]     Audit-Id: 6d69090f-b253-4afe-892d-6ba1e2ebf425
	I0514 00:22:36.429668    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:36.429668    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:36.430257    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:22:36.431007    4316 pod_ready.go:92] pod "kube-proxy-zhcz6" in "kube-system" namespace has status "Ready":"True"
	I0514 00:22:36.431092    4316 pod_ready.go:81] duration metric: took 388.2655ms for pod "kube-proxy-zhcz6" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:36.431092    4316 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:36.630031    4316 request.go:629] Waited for 198.7442ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-101100
	I0514 00:22:36.630621    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-101100
	I0514 00:22:36.630621    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:36.630621    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:36.630829    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:36.634252    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:22:36.634716    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:36.634716    4316 round_trippers.go:580]     Audit-Id: c02198a9-1730-432f-bf91-5260c5f2b16b
	I0514 00:22:36.634716    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:36.634716    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:36.634716    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:36.634716    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:36.634826    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:36 GMT
	I0514 00:22:36.635201    4316 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-101100","namespace":"kube-system","uid":"d7300c2d-377f-4061-bd34-5f7593b7e827","resourceVersion":"1756","creationTimestamp":"2024-05-13T23:56:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8083abd658221f47cabf81a00c4ca98e","kubernetes.io/config.mirror":"8083abd658221f47cabf81a00c4ca98e","kubernetes.io/config.seen":"2024-05-13T23:56:09.392108241Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-13T23:56:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0514 00:22:36.831672    4316 request.go:629] Waited for 195.5902ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:22:36.832176    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes/multinode-101100
	I0514 00:22:36.832255    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:36.832327    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:36.832327    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:36.835655    4316 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0514 00:22:36.836206    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:36.836206    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:37 GMT
	I0514 00:22:36.836206    4316 round_trippers.go:580]     Audit-Id: 641059ea-6761-4f4f-8867-f47b2d8b3932
	I0514 00:22:36.836206    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:36.836206    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:36.836206    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:36.836206    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:36.836421    4316 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-13T23:56:06Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0514 00:22:36.836916    4316 pod_ready.go:92] pod "kube-scheduler-multinode-101100" in "kube-system" namespace has status "Ready":"True"
	I0514 00:22:36.837020    4316 pod_ready.go:81] duration metric: took 405.8799ms for pod "kube-scheduler-multinode-101100" in "kube-system" namespace to be "Ready" ...
	I0514 00:22:36.837020    4316 pod_ready.go:38] duration metric: took 1.6102629s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 00:22:36.837020    4316 system_svc.go:44] waiting for kubelet service to be running ....
	I0514 00:22:36.845708    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0514 00:22:36.872997    4316 system_svc.go:56] duration metric: took 35.9749ms WaitForService to wait for kubelet
	I0514 00:22:36.873132    4316 kubeadm.go:576] duration metric: took 5.9096231s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0514 00:22:36.873196    4316 node_conditions.go:102] verifying NodePressure condition ...
	I0514 00:22:37.034158    4316 request.go:629] Waited for 160.867ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.102.122:8443/api/v1/nodes
	I0514 00:22:37.034398    4316 round_trippers.go:463] GET https://172.23.102.122:8443/api/v1/nodes
	I0514 00:22:37.034398    4316 round_trippers.go:469] Request Headers:
	I0514 00:22:37.034482    4316 round_trippers.go:473]     Accept: application/json, */*
	I0514 00:22:37.034482    4316 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0514 00:22:37.037224    4316 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0514 00:22:37.038199    4316 round_trippers.go:577] Response Headers:
	I0514 00:22:37.038199    4316 round_trippers.go:580]     Audit-Id: 0bba16d2-0dff-472f-9e7a-5eb6c7dd1a4d
	I0514 00:22:37.038199    4316 round_trippers.go:580]     Cache-Control: no-cache, private
	I0514 00:22:37.038199    4316 round_trippers.go:580]     Content-Type: application/json
	I0514 00:22:37.038199    4316 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1495848d-29ed-4811-bb84-192c6acef94c
	I0514 00:22:37.038199    4316 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3e16ff31-094d-4bd4-997d-3f2c98ff0c0a
	I0514 00:22:37.038199    4316 round_trippers.go:580]     Date: Tue, 14 May 2024 00:22:37 GMT
	I0514 00:22:37.038555    4316 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2206"},"items":[{"metadata":{"name":"multinode-101100","uid":"f7fee432-2e9a-47bb-9ce3-fa1a251d71c9","resourceVersion":"1825","creationTimestamp":"2024-05-13T23:56:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-101100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bf4e5d623f67cc0fbec852b09e6284e0ebf63761","minikube.k8s.io/name":"multinode-101100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_13T23_56_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14852 chars]
	I0514 00:22:37.039449    4316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 00:22:37.039530    4316 node_conditions.go:123] node cpu capacity is 2
	I0514 00:22:37.039530    4316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 00:22:37.039530    4316 node_conditions.go:123] node cpu capacity is 2
	I0514 00:22:37.039530    4316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 00:22:37.039530    4316 node_conditions.go:123] node cpu capacity is 2
	I0514 00:22:37.039530    4316 node_conditions.go:105] duration metric: took 166.3233ms to run NodePressure ...
	I0514 00:22:37.039530    4316 start.go:240] waiting for startup goroutines ...
	I0514 00:22:37.039618    4316 start.go:254] writing updated cluster config ...
	I0514 00:22:37.048078    4316 ssh_runner.go:195] Run: rm -f paused
	I0514 00:22:37.170898    4316 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0514 00:22:37.173942    4316 out.go:177] * Done! kubectl is now configured to use "multinode-101100" cluster and "default" namespace by default
	
	
	==> Docker <==
	May 14 00:18:07 multinode-101100 dockerd[1049]: 2024/05/14 00:18:07 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:10 multinode-101100 dockerd[1049]: 2024/05/14 00:18:10 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:11 multinode-101100 dockerd[1049]: 2024/05/14 00:18:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:11 multinode-101100 dockerd[1049]: 2024/05/14 00:18:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:11 multinode-101100 dockerd[1049]: 2024/05/14 00:18:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:11 multinode-101100 dockerd[1049]: 2024/05/14 00:18:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:18:11 multinode-101100 dockerd[1049]: 2024/05/14 00:18:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:22:58 multinode-101100 dockerd[1049]: 2024/05/14 00:22:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:22:58 multinode-101100 dockerd[1049]: 2024/05/14 00:22:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:22:59 multinode-101100 dockerd[1049]: 2024/05/14 00:22:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:22:59 multinode-101100 dockerd[1049]: 2024/05/14 00:22:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:22:59 multinode-101100 dockerd[1049]: 2024/05/14 00:22:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:22:59 multinode-101100 dockerd[1049]: 2024/05/14 00:22:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:22:59 multinode-101100 dockerd[1049]: 2024/05/14 00:22:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:22:59 multinode-101100 dockerd[1049]: 2024/05/14 00:22:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:22:59 multinode-101100 dockerd[1049]: 2024/05/14 00:22:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:22:59 multinode-101100 dockerd[1049]: 2024/05/14 00:22:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:22:59 multinode-101100 dockerd[1049]: 2024/05/14 00:22:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 00:22:59 multinode-101100 dockerd[1049]: 2024/05/14 00:22:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3d0b2f0362eb4       8c811b4aec35f                                                                                         5 minutes ago       Running             busybox                   1                   8cb9b6d6d0915       busybox-fc5497c4f-xqj6w
	dcc5a109288b6       cbb01a7bd410d                                                                                         5 minutes ago       Running             coredns                   1                   1cccb5e8cee3b       coredns-7db6d8ff4d-4kmx4
	bde84ba2d4ed7       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner       2                   468a0e2976ae4       storage-provisioner
	2b424a7cd98c8       4950bb10b3f87                                                                                         6 minutes ago       Running             kindnet-cni               2                   5233e076edceb       kindnet-9q2tv
	b7d8d9a5e5eaf       4950bb10b3f87                                                                                         6 minutes ago       Exited              kindnet-cni               1                   5233e076edceb       kindnet-9q2tv
	b142687b621f1       6e38f40d628db                                                                                         6 minutes ago       Exited              storage-provisioner       1                   468a0e2976ae4       storage-provisioner
	b2a1b31cd7dee       a0bf559e280cf                                                                                         6 minutes ago       Running             kube-proxy                1                   a8ac60a565998       kube-proxy-zhcz6
	08450c853590d       3861cfcd7c04c                                                                                         6 minutes ago       Running             etcd                      0                   419648c0d4053       etcd-multinode-101100
	da9e6534cd87d       c42f13656d0b2                                                                                         6 minutes ago       Running             kube-apiserver            0                   509b8407e0955       kube-apiserver-multinode-101100
	d3581c1c570cf       259c8277fcbbc                                                                                         6 minutes ago       Running             kube-scheduler            1                   ddcaadef980ac       kube-scheduler-multinode-101100
	b87239d1199ab       c7aad43836fa5                                                                                         6 minutes ago       Running             kube-controller-manager   1                   659643d47b9ae       kube-controller-manager-multinode-101100
	57dea5416eb67       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   76d1b8ce19aba       busybox-fc5497c4f-xqj6w
	76c5ab7859eff       cbb01a7bd410d                                                                                         27 minutes ago      Exited              coredns                   0                   8bb49b28c842a       coredns-7db6d8ff4d-4kmx4
	91edaaa00da23       a0bf559e280cf                                                                                         27 minutes ago      Exited              kube-proxy                0                   9bd694480978f       kube-proxy-zhcz6
	e96f94398d6dd       c7aad43836fa5                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   da9268fd6556b       kube-controller-manager-multinode-101100
	964887fc5d362       259c8277fcbbc                                                                                         27 minutes ago      Exited              kube-scheduler            0                   fcb3b27edcd2a       kube-scheduler-multinode-101100
	
	
	==> coredns [76c5ab7859ef] <==
	[INFO] 10.244.0.3:52495 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000145803s
	[INFO] 10.244.0.3:46357 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066702s
	[INFO] 10.244.0.3:41390 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062301s
	[INFO] 10.244.0.3:35739 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000084301s
	[INFO] 10.244.0.3:44800 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000163303s
	[INFO] 10.244.0.3:57631 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068702s
	[INFO] 10.244.0.3:50842 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135702s
	[INFO] 10.244.1.2:41210 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204604s
	[INFO] 10.244.1.2:57858 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073801s
	[INFO] 10.244.1.2:48782 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000152303s
	[INFO] 10.244.1.2:36081 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121002s
	[INFO] 10.244.0.3:46909 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115002s
	[INFO] 10.244.0.3:36030 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000220205s
	[INFO] 10.244.0.3:56187 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059401s
	[INFO] 10.244.0.3:51500 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099802s
	[INFO] 10.244.1.2:57247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147903s
	[INFO] 10.244.1.2:46132 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000170203s
	[INFO] 10.244.1.2:57206 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000452309s
	[INFO] 10.244.1.2:44795 - 5 "PTR IN 1.96.23.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000146203s
	[INFO] 10.244.0.3:33385 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000082102s
	[INFO] 10.244.0.3:56742 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000173704s
	[INFO] 10.244.0.3:46927 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185904s
	[INFO] 10.244.0.3:42956 - 5 "PTR IN 1.96.23.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000054801s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [dcc5a109288b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = aa3c53a4fee7c79042020c4ad5abc53f615c90ace85c56ddcef4febd643c83c914a53a500e1bfe4eab6dd4f6a22b9d2014a8ba875b505ed10d3063ed95ac2ed3
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53257 - 27032 "HINFO IN 6976640239659908905.245956973392320689. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.05278328s
	
	
	==> describe nodes <==
	Name:               multinode-101100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-101100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	                    minikube.k8s.io/name=multinode-101100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_13T23_56_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 May 2024 23:56:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-101100
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 May 2024 00:23:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 May 2024 00:22:41 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 May 2024 00:22:41 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 May 2024 00:22:41 +0000   Mon, 13 May 2024 23:56:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 May 2024 00:22:41 +0000   Tue, 14 May 2024 00:17:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.102.122
	  Hostname:    multinode-101100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 5110a322e7104904905e303a94b950b6
	  System UUID:                9b23fe4d-6d34-444b-8185-a84d51d23610
	  Boot ID:                    2e73d191-2dbe-4055-a17d-cff8a9e53a15
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xqj6w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 coredns-7db6d8ff4d-4kmx4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-101100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m50s
	  kube-system                 kindnet-9q2tv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-101100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m50s
	  kube-system                 kube-controller-manager-multinode-101100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-zhcz6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-101100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 27m                    kube-proxy       
	  Normal  Starting                 6m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)      kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)      kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m                    kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    27m                    kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                    kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           27m                    node-controller  Node multinode-101100 event: Registered Node multinode-101100 in Controller
	  Normal  NodeReady                27m                    kubelet          Node multinode-101100 status is now: NodeReady
	  Normal  Starting                 6m56s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m55s (x8 over 6m56s)  kubelet          Node multinode-101100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m55s (x8 over 6m56s)  kubelet          Node multinode-101100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m55s (x7 over 6m56s)  kubelet          Node multinode-101100 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m38s                  node-controller  Node multinode-101100 event: Registered Node multinode-101100 in Controller
	
	
	Name:               multinode-101100-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-101100-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	                    minikube.k8s.io/name=multinode-101100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_14T00_20_20_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 May 2024 00:20:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-101100-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 May 2024 00:23:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 May 2024 00:20:26 +0000   Tue, 14 May 2024 00:20:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 May 2024 00:20:26 +0000   Tue, 14 May 2024 00:20:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 May 2024 00:20:26 +0000   Tue, 14 May 2024 00:20:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 May 2024 00:20:26 +0000   Tue, 14 May 2024 00:20:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.97.128
	  Hostname:    multinode-101100-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 7eac7377d3bb4e40acf99c8af02c1e3b
	  System UUID:                4330851b-5248-f245-9378-5fc25e670b55
	  Boot ID:                    333163f1-b084-4523-b207-0d343c1c025a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5rj9g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 kindnet-2lwsm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-b25hq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m23s                  kube-proxy       
	  Normal  Starting                 24m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  24m (x2 over 24m)      kubelet          Node multinode-101100-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x2 over 24m)      kubelet          Node multinode-101100-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x2 over 24m)      kubelet          Node multinode-101100-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                24m                    kubelet          Node multinode-101100-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m26s (x2 over 3m26s)  kubelet          Node multinode-101100-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m26s (x2 over 3m26s)  kubelet          Node multinode-101100-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m26s (x2 over 3m26s)  kubelet          Node multinode-101100-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m23s                  node-controller  Node multinode-101100-m02 event: Registered Node multinode-101100-m02 in Controller
	  Normal  NodeReady                3m19s                  kubelet          Node multinode-101100-m02 status is now: NodeReady
	
	
	Name:               multinode-101100-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-101100-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	                    minikube.k8s.io/name=multinode-101100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_14T00_22_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 May 2024 00:22:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-101100-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 May 2024 00:23:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 May 2024 00:22:35 +0000   Tue, 14 May 2024 00:22:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 May 2024 00:22:35 +0000   Tue, 14 May 2024 00:22:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 May 2024 00:22:35 +0000   Tue, 14 May 2024 00:22:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 May 2024 00:22:35 +0000   Tue, 14 May 2024 00:22:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.111.37
	  Hostname:    multinode-101100-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a8bef3345214f33927ed9bf1f9a1561
	  System UUID:                0ee228e5-87a6-0549-9a8d-1747b73431ee
	  Boot ID:                    e676460f-3a83-4ead-9990-8f26c0c78374
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-tfbt8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	  kube-system                 kube-proxy-8zsgn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 71s                kube-proxy       
	  Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-101100-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                20m                kubelet          Node multinode-101100-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     11m (x2 over 11m)  kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet          Node multinode-101100-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m (x2 over 11m)  kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeReady                10m                kubelet          Node multinode-101100-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  75s (x2 over 75s)  kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    75s (x2 over 75s)  kubelet          Node multinode-101100-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     75s (x2 over 75s)  kubelet          Node multinode-101100-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           73s                node-controller  Node multinode-101100-m03 event: Registered Node multinode-101100-m03 in Controller
	  Normal  NodeReady                70s                kubelet          Node multinode-101100-m03 status is now: NodeReady
	
	
	==> dmesg <==
	              * this clock source is slow. Consider trying other clock sources
	[  +5.692465] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.707713] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.789899] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.282690] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[May14 00:16] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.158382] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[ +23.750429] systemd-fstab-generator[974]: Ignoring "noauto" option for root device
	[  +0.111929] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.464883] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	[  +0.164872] systemd-fstab-generator[1027]: Ignoring "noauto" option for root device
	[  +0.194348] systemd-fstab-generator[1041]: Ignoring "noauto" option for root device
	[  +2.832176] systemd-fstab-generator[1229]: Ignoring "noauto" option for root device
	[  +0.181315] systemd-fstab-generator[1241]: Ignoring "noauto" option for root device
	[  +0.160798] systemd-fstab-generator[1253]: Ignoring "noauto" option for root device
	[  +0.238904] systemd-fstab-generator[1268]: Ignoring "noauto" option for root device
	[  +0.787359] systemd-fstab-generator[1378]: Ignoring "noauto" option for root device
	[  +0.085936] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.384697] systemd-fstab-generator[1513]: Ignoring "noauto" option for root device
	[  +1.802132] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.213940] kauditd_printk_skb: 10 callbacks suppressed
	[  +3.471694] systemd-fstab-generator[2315]: Ignoring "noauto" option for root device
	[May14 00:17] kauditd_printk_skb: 70 callbacks suppressed
	
	
	==> etcd [08450c853590] <==
	{"level":"info","ts":"2024-05-14T00:16:51.816877Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-14T00:16:51.816978Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-14T00:16:51.817493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f switched to configuration voters=(7947751373170489359)"}
	{"level":"info","ts":"2024-05-14T00:16:51.817687Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bb849d1df0b559d7","local-member-id":"6e4c15c3d0f3380f","added-peer-id":"6e4c15c3d0f3380f","added-peer-peer-urls":["https://172.23.106.39:2380"]}
	{"level":"info","ts":"2024-05-14T00:16:51.817911Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bb849d1df0b559d7","local-member-id":"6e4c15c3d0f3380f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-14T00:16:51.818648Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-14T00:16:51.83299Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-14T00:16:51.834951Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6e4c15c3d0f3380f","initial-advertise-peer-urls":["https://172.23.102.122:2380"],"listen-peer-urls":["https://172.23.102.122:2380"],"advertise-client-urls":["https://172.23.102.122:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.102.122:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-14T00:16:51.835138Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-14T00:16:51.835469Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.23.102.122:2380"}
	{"level":"info","ts":"2024-05-14T00:16:51.835603Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.23.102.122:2380"}
	{"level":"info","ts":"2024-05-14T00:16:53.468953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-14T00:16:53.469136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-14T00:16:53.469191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f received MsgPreVoteResp from 6e4c15c3d0f3380f at term 2"}
	{"level":"info","ts":"2024-05-14T00:16:53.469216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became candidate at term 3"}
	{"level":"info","ts":"2024-05-14T00:16:53.469228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f received MsgVoteResp from 6e4c15c3d0f3380f at term 3"}
	{"level":"info","ts":"2024-05-14T00:16:53.469245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6e4c15c3d0f3380f became leader at term 3"}
	{"level":"info","ts":"2024-05-14T00:16:53.469259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6e4c15c3d0f3380f elected leader 6e4c15c3d0f3380f at term 3"}
	{"level":"info","ts":"2024-05-14T00:16:53.479025Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"6e4c15c3d0f3380f","local-member-attributes":"{Name:multinode-101100 ClientURLs:[https://172.23.102.122:2379]}","request-path":"/0/members/6e4c15c3d0f3380f/attributes","cluster-id":"bb849d1df0b559d7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-14T00:16:53.479459Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-14T00:16:53.479642Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-14T00:16:53.481317Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-14T00:16:53.481353Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-14T00:16:53.483334Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.23.102.122:2379"}
	{"level":"info","ts":"2024-05-14T00:16:53.483616Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 00:23:45 up 8 min,  0 users,  load average: 0.14, 0.23, 0.13
	Linux multinode-101100 5.10.207 #1 SMP Thu May 9 02:07:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2b424a7cd98c] <==
	I0514 00:22:59.515588       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.2.0/24] 
	I0514 00:23:09.528273       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:23:09.528308       1 main.go:227] handling current node
	I0514 00:23:09.528319       1 main.go:223] Handling node with IPs: map[172.23.97.128:{}]
	I0514 00:23:09.528325       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:23:09.528429       1 main.go:223] Handling node with IPs: map[172.23.111.37:{}]
	I0514 00:23:09.528451       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.2.0/24] 
	I0514 00:23:19.534960       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:23:19.534997       1 main.go:227] handling current node
	I0514 00:23:19.535007       1 main.go:223] Handling node with IPs: map[172.23.97.128:{}]
	I0514 00:23:19.535013       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:23:19.535279       1 main.go:223] Handling node with IPs: map[172.23.111.37:{}]
	I0514 00:23:19.535307       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.2.0/24] 
	I0514 00:23:29.547679       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:23:29.547798       1 main.go:227] handling current node
	I0514 00:23:29.547811       1 main.go:223] Handling node with IPs: map[172.23.97.128:{}]
	I0514 00:23:29.547818       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:23:29.548166       1 main.go:223] Handling node with IPs: map[172.23.111.37:{}]
	I0514 00:23:29.548253       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.2.0/24] 
	I0514 00:23:39.561800       1 main.go:223] Handling node with IPs: map[172.23.102.122:{}]
	I0514 00:23:39.561990       1 main.go:227] handling current node
	I0514 00:23:39.562036       1 main.go:223] Handling node with IPs: map[172.23.97.128:{}]
	I0514 00:23:39.562116       1 main.go:250] Node multinode-101100-m02 has CIDR [10.244.1.0/24] 
	I0514 00:23:39.562384       1 main.go:223] Handling node with IPs: map[172.23.111.37:{}]
	I0514 00:23:39.562492       1 main.go:250] Node multinode-101100-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [b7d8d9a5e5ea] <==
	I0514 00:16:57.751233       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0514 00:16:57.751585       1 main.go:107] hostIP = 172.23.102.122
	podIP = 172.23.102.122
	I0514 00:16:57.752181       1 main.go:116] setting mtu 1500 for CNI 
	I0514 00:16:57.752200       1 main.go:146] kindnetd IP family: "ipv4"
	I0514 00:16:57.752221       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0514 00:17:01.123977       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:17:04.195495       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:17:07.267636       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:17:10.339619       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0514 00:17:13.411801       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [da9e6534cd87] <==
	I0514 00:16:54.938841       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0514 00:16:54.950730       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0514 00:16:54.950897       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0514 00:16:54.951294       1 aggregator.go:165] initial CRD sync complete...
	I0514 00:16:54.951545       1 autoregister_controller.go:141] Starting autoregister controller
	I0514 00:16:54.951793       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0514 00:16:54.951875       1 cache.go:39] Caches are synced for autoregister controller
	I0514 00:16:54.962299       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0514 00:16:54.968027       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0514 00:16:54.968302       1 policy_source.go:224] refreshing policies
	I0514 00:16:54.997391       1 shared_informer.go:320] Caches are synced for configmaps
	I0514 00:16:54.999391       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0514 00:16:54.999732       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0514 00:16:54.999871       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0514 00:16:55.037244       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0514 00:16:55.824524       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0514 00:16:56.521956       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.102.122 172.23.106.39]
	I0514 00:16:56.523614       1 controller.go:615] quota admission added evaluator for: endpoints
	I0514 00:16:56.536716       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0514 00:16:57.861026       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0514 00:16:58.068043       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0514 00:16:58.085925       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0514 00:16:58.189328       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0514 00:16:58.200849       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0514 00:17:16.528300       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.102.122]
	
	
	==> kube-controller-manager [b87239d1199a] <==
	I0514 00:18:01.608844       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.702µs"
	I0514 00:18:01.651304       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.008µs"
	I0514 00:18:01.710123       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.783088ms"
	I0514 00:18:01.711762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.302µs"
	I0514 00:20:06.232732       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.947276ms"
	I0514 00:20:06.232825       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.603µs"
	I0514 00:20:06.272284       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.569316ms"
	I0514 00:20:06.272367       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.402µs"
	I0514 00:20:19.847832       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m02\" does not exist"
	I0514 00:20:19.864793       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m02" podCIDRs=["10.244.1.0/24"]
	I0514 00:20:20.749261       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.103µs"
	I0514 00:20:26.533952       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:20:26.568298       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.103µs"
	I0514 00:20:34.823799       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.505µs"
	I0514 00:20:34.839919       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.305µs"
	I0514 00:20:34.869792       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="165.412µs"
	I0514 00:20:34.913147       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.103µs"
	I0514 00:20:34.918380       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.003µs"
	I0514 00:20:35.952839       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.08245ms"
	I0514 00:20:35.953204       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.105µs"
	I0514 00:22:24.786914       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:22:30.376713       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:22:30.376939       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m03\" does not exist"
	I0514 00:22:30.415927       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m03" podCIDRs=["10.244.2.0/24"]
	I0514 00:22:35.343204       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	
	
	==> kube-controller-manager [e96f94398d6d] <==
	I0513 23:59:02.603699       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m02"
	I0513 23:59:22.527169       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0513 23:59:45.791856       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.887671ms"
	I0513 23:59:45.808219       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.096894ms"
	I0513 23:59:45.808747       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.005µs"
	I0513 23:59:45.809833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.705µs"
	I0513 23:59:45.811263       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.604µs"
	I0513 23:59:48.526617       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.926472ms"
	I0513 23:59:48.529326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.302µs"
	I0513 23:59:48.555529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.972453ms"
	I0513 23:59:48.556317       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.601µs"
	I0514 00:03:17.563212       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:03:17.565297       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m03\" does not exist"
	I0514 00:03:17.579900       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m03" podCIDRs=["10.244.2.0/24"]
	I0514 00:03:17.665892       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101100-m03"
	I0514 00:03:38.035898       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:10:17.797191       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:12:39.070271       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:12:44.527915       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:12:44.528275       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101100-m03\" does not exist"
	I0514 00:12:44.543895       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101100-m03" podCIDRs=["10.244.3.0/24"]
	I0514 00:12:49.983419       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:14:17.920991       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101100-m02"
	I0514 00:14:33.013074       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.740609ms"
	I0514 00:14:33.013918       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.506µs"
	
	
	==> kube-proxy [91edaaa00da2] <==
	I0513 23:56:24.901713       1 server_linux.go:69] "Using iptables proxy"
	I0513 23:56:24.929714       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.106.39"]
	I0513 23:56:24.982680       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0513 23:56:24.982795       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0513 23:56:24.982816       1 server_linux.go:165] "Using iptables Proxier"
	I0513 23:56:24.988669       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0513 23:56:24.989566       1 server.go:872] "Version info" version="v1.30.0"
	I0513 23:56:24.989671       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0513 23:56:24.992700       1 config.go:192] "Starting service config controller"
	I0513 23:56:24.993131       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0513 23:56:24.993327       1 config.go:101] "Starting endpoint slice config controller"
	I0513 23:56:24.993339       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0513 23:56:24.994714       1 config.go:319] "Starting node config controller"
	I0513 23:56:24.994744       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0513 23:56:25.094420       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0513 23:56:25.094530       1 shared_informer.go:320] Caches are synced for service config
	I0513 23:56:25.094981       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [b2a1b31cd7de] <==
	I0514 00:16:57.528613       1 server_linux.go:69] "Using iptables proxy"
	I0514 00:16:57.562847       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.102.122"]
	I0514 00:16:57.701301       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0514 00:16:57.701447       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0514 00:16:57.701476       1 server_linux.go:165] "Using iptables Proxier"
	I0514 00:16:57.708219       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0514 00:16:57.708800       1 server.go:872] "Version info" version="v1.30.0"
	I0514 00:16:57.708841       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:16:57.712422       1 config.go:192] "Starting service config controller"
	I0514 00:16:57.712733       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0514 00:16:57.712780       1 config.go:101] "Starting endpoint slice config controller"
	I0514 00:16:57.712824       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0514 00:16:57.715339       1 config.go:319] "Starting node config controller"
	I0514 00:16:57.717651       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0514 00:16:57.815732       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0514 00:16:57.815811       1 shared_informer.go:320] Caches are synced for service config
	I0514 00:16:57.818050       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [964887fc5d36] <==
	E0513 23:56:07.344853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0513 23:56:07.410556       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0513 23:56:07.410716       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0513 23:56:07.423084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0513 23:56:07.423126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0513 23:56:07.467897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0513 23:56:07.467939       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0513 23:56:07.484903       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0513 23:56:07.485019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0513 23:56:07.545758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0513 23:56:07.546087       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0513 23:56:07.573884       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0513 23:56:07.573980       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0513 23:56:07.633780       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0513 23:56:07.633901       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0513 23:56:07.680821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0513 23:56:07.680938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0513 23:56:07.704130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0513 23:56:07.704357       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0513 23:56:07.736914       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0513 23:56:07.737079       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0513 23:56:07.754367       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0513 23:56:07.754798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0513 23:56:09.676327       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0514 00:14:35.689344       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d3581c1c570c] <==
	I0514 00:16:52.716401       1 serving.go:380] Generated self-signed cert in-memory
	W0514 00:16:54.858727       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0514 00:16:54.858778       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0514 00:16:54.858790       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0514 00:16:54.858800       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0514 00:16:54.945438       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0514 00:16:54.945867       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:16:54.953986       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0514 00:16:54.957180       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0514 00:16:54.957284       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 00:16:54.957493       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 00:16:55.058381       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 14 00:18:49 multinode-101100 kubelet[1520]: E0514 00:18:49.924631    1520 iptables.go:577] "Could not set up iptables canary" err=<
	May 14 00:18:49 multinode-101100 kubelet[1520]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 14 00:18:49 multinode-101100 kubelet[1520]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 14 00:18:49 multinode-101100 kubelet[1520]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 14 00:18:49 multinode-101100 kubelet[1520]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 14 00:19:49 multinode-101100 kubelet[1520]: E0514 00:19:49.922932    1520 iptables.go:577] "Could not set up iptables canary" err=<
	May 14 00:19:49 multinode-101100 kubelet[1520]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 14 00:19:49 multinode-101100 kubelet[1520]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 14 00:19:49 multinode-101100 kubelet[1520]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 14 00:19:49 multinode-101100 kubelet[1520]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 14 00:20:49 multinode-101100 kubelet[1520]: E0514 00:20:49.922147    1520 iptables.go:577] "Could not set up iptables canary" err=<
	May 14 00:20:49 multinode-101100 kubelet[1520]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 14 00:20:49 multinode-101100 kubelet[1520]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 14 00:20:49 multinode-101100 kubelet[1520]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 14 00:20:49 multinode-101100 kubelet[1520]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 14 00:21:49 multinode-101100 kubelet[1520]: E0514 00:21:49.922718    1520 iptables.go:577] "Could not set up iptables canary" err=<
	May 14 00:21:49 multinode-101100 kubelet[1520]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 14 00:21:49 multinode-101100 kubelet[1520]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 14 00:21:49 multinode-101100 kubelet[1520]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 14 00:21:49 multinode-101100 kubelet[1520]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 14 00:22:49 multinode-101100 kubelet[1520]: E0514 00:22:49.927158    1520 iptables.go:577] "Could not set up iptables canary" err=<
	May 14 00:22:49 multinode-101100 kubelet[1520]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 14 00:22:49 multinode-101100 kubelet[1520]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 14 00:22:49 multinode-101100 kubelet[1520]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 14 00:22:49 multinode-101100 kubelet[1520]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0514 00:23:34.575828    8832 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-101100 -n multinode-101100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-101100 -n multinode-101100: (10.7075564s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-101100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/DeleteNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeleteNode (46.31s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (299.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-650500 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-650500 --driver=hyperv: exit status 1 (4m59.716445s)

                                                
                                                
-- stdout --
	* [NoKubernetes-650500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-650500" primary control-plane node in "NoKubernetes-650500" cluster

                                                
                                                
-- /stdout --
** stderr ** 
	W0514 00:39:04.941727    7648 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-650500 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-650500 -n NoKubernetes-650500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-650500 -n NoKubernetes-650500: exit status 7 (194.9768ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	W0514 00:44:04.660056   10700 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-650500" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (299.91s)

                                                
                                    
TestPause/serial/DeletePaused (104.82s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-851700 --alsologtostderr -v=5
pause_test.go:132: (dbg) Non-zero exit: out/minikube-windows-amd64.exe delete -p pause-851700 --alsologtostderr -v=5: exit status 1 (12.0790617s)

                                                
                                                
-- stdout --
	* Stopping node "pause-851700"  ...
	* Powering off "pause-851700" via SSH ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0514 01:12:53.104372   13412 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0514 01:12:53.168377   13412 out.go:291] Setting OutFile to fd 860 ...
	I0514 01:12:53.168377   13412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0514 01:12:53.168377   13412 out.go:304] Setting ErrFile to fd 912...
	I0514 01:12:53.168377   13412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0514 01:12:53.181379   13412 out.go:298] Setting JSON to false
	I0514 01:12:53.188379   13412 cli_runner.go:164] Run: docker ps -a --filter label=name.minikube.sigs.k8s.io --format {{.Names}}
	I0514 01:12:53.370138   13412 config.go:182] Loaded profile config "auto-204600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:12:53.370788   13412 config.go:182] Loaded profile config "calico-204600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:12:53.371182   13412 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:12:53.371556   13412 config.go:182] Loaded profile config "kindnet-204600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:12:53.371556   13412 config.go:182] Loaded profile config "pause-851700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:12:53.372205   13412 config.go:182] Loaded profile config "pause-851700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:12:53.372263   13412 delete.go:301] DeleteProfiles
	I0514 01:12:53.372263   13412 delete.go:329] Deleting pause-851700
	I0514 01:12:53.372263   13412 delete.go:334] pause-851700 configuration: &{Name:pause-851700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-851700 Namespace:defau
lt APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.111.154 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false
portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0514 01:12:53.372826   13412 config.go:182] Loaded profile config "pause-851700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:12:53.373139   13412 config.go:182] Loaded profile config "pause-851700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:12:53.374519   13412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:12:55.698986   13412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:12:55.698986   13412 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:55.699057   13412 stop.go:39] StopHost: pause-851700
	I0514 01:12:55.702450   13412 out.go:177] * Stopping node "pause-851700"  ...
	I0514 01:12:55.704387   13412 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0514 01:12:55.714504   13412 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0514 01:12:55.714836   13412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:12:57.992791   13412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:12:57.993811   13412 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:57.993867   13412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:13:00.734326   13412 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:13:00.734566   13412 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:00.735018   13412 sshutil.go:53] new ssh client: &{IP:172.23.111.154 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\pause-851700\id_rsa Username:docker}
	I0514 01:13:00.855771   13412 ssh_runner.go:235] Completed: sudo mkdir -p /var/lib/minikube/backup: (5.1408415s)
	I0514 01:13:00.864766   13412 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0514 01:13:00.941035   13412 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0514 01:13:01.005705   13412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:13:03.301946   13412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:13:03.301946   13412 main.go:141] libmachine: [stderr =====>] : 
	W0514 01:13:03.302867   13412 register.go:133] "PowerOff" was not found within the registered steps for "Deleting": [Deleting Stopping Done Purging home dir]
	I0514 01:13:03.307859   13412 out.go:177] * Powering off "pause-851700" via SSH ...
	I0514 01:13:03.310383   13412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state

** /stderr **
pause_test.go:134: failed to delete minikube with args: "out/minikube-windows-amd64.exe delete -p pause-851700 --alsologtostderr -v=5" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-851700 -n pause-851700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-851700 -n pause-851700: exit status 2 (13.0129361s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0514 01:13:05.196807    4556 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/DeletePaused FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/DeletePaused]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-851700 logs -n 25
E0514 01:13:33.237210    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-851700 logs -n 25: (19.1035886s)
helpers_test.go:252: TestPause/serial/DeletePaused logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|-------------------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p auto-204600 sudo                                  | auto-204600    | minikube5\jenkins | v1.33.1 | 14 May 24 01:11 UTC | 14 May 24 01:11 UTC |
	|         | iptables-save                                        |                |                   |         |                     |                     |
	| ssh     | -p auto-204600 sudo iptables                         | auto-204600    | minikube5\jenkins | v1.33.1 | 14 May 24 01:11 UTC | 14 May 24 01:11 UTC |
	|         | -t nat -L -n -v                                      |                |                   |         |                     |                     |
	| ssh     | -p auto-204600 sudo systemctl                        | auto-204600    | minikube5\jenkins | v1.33.1 | 14 May 24 01:11 UTC | 14 May 24 01:11 UTC |
	|         | status kubelet --all --full                          |                |                   |         |                     |                     |
	|         | --no-pager                                           |                |                   |         |                     |                     |
	| ssh     | -p auto-204600 sudo systemctl                        | auto-204600    | minikube5\jenkins | v1.33.1 | 14 May 24 01:11 UTC | 14 May 24 01:11 UTC |
	|         | cat kubelet --no-pager                               |                |                   |         |                     |                     |
	| ssh     | -p kindnet-204600 pgrep -a                           | kindnet-204600 | minikube5\jenkins | v1.33.1 | 14 May 24 01:11 UTC | 14 May 24 01:11 UTC |
	|         | kubelet                                              |                |                   |         |                     |                     |
	| ssh     | -p auto-204600 sudo journalctl                       | auto-204600    | minikube5\jenkins | v1.33.1 | 14 May 24 01:11 UTC | 14 May 24 01:11 UTC |
	|         | -xeu kubelet --all --full                            |                |                   |         |                     |                     |
	|         | --no-pager                                           |                |                   |         |                     |                     |
	| ssh     | -p auto-204600 sudo cat                              | auto-204600    | minikube5\jenkins | v1.33.1 | 14 May 24 01:11 UTC | 14 May 24 01:12 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |                   |         |                     |                     |
	| ssh     | -p auto-204600 sudo cat                              | auto-204600    | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC | 14 May 24 01:12 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |                   |         |                     |                     |
	| ssh     | -p auto-204600 sudo systemctl                        | auto-204600    | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC | 14 May 24 01:12 UTC |
	|         | status docker --all --full                           |                |                   |         |                     |                     |
	|         | --no-pager                                           |                |                   |         |                     |                     |
	| pause   | -p pause-851700                                      | pause-851700   | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC | 14 May 24 01:12 UTC |
	|         | --alsologtostderr -v=5                               |                |                   |         |                     |                     |
	| ssh     | -p kindnet-204600 sudo cat                           | kindnet-204600 | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC | 14 May 24 01:12 UTC |
	|         | /etc/nsswitch.conf                                   |                |                   |         |                     |                     |
	| ssh     | -p auto-204600 sudo systemctl                        | auto-204600    | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC | 14 May 24 01:12 UTC |
	|         | cat docker --no-pager                                |                |                   |         |                     |                     |
	| ssh     | -p kindnet-204600 sudo cat                           | kindnet-204600 | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC | 14 May 24 01:12 UTC |
	|         | /etc/hosts                                           |                |                   |         |                     |                     |
	| ssh     | -p auto-204600 sudo cat                              | auto-204600    | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC | 14 May 24 01:12 UTC |
	|         | /etc/docker/daemon.json                              |                |                   |         |                     |                     |
	| ssh     | -p kindnet-204600 sudo cat                           | kindnet-204600 | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC | 14 May 24 01:12 UTC |
	|         | /etc/resolv.conf                                     |                |                   |         |                     |                     |
	| unpause | -p pause-851700                                      | pause-851700   | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC | 14 May 24 01:12 UTC |
	|         | --alsologtostderr -v=5                               |                |                   |         |                     |                     |
	| ssh     | -p auto-204600 sudo docker                           | auto-204600    | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC | 14 May 24 01:12 UTC |
	|         | system info                                          |                |                   |         |                     |                     |
	| pause   | -p pause-851700                                      | pause-851700   | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC | 14 May 24 01:12 UTC |
	|         | --alsologtostderr -v=5                               |                |                   |         |                     |                     |
	| ssh     | -p kindnet-204600 sudo crictl                        | kindnet-204600 | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC | 14 May 24 01:12 UTC |
	|         | pods                                                 |                |                   |         |                     |                     |
	| ssh     | -p auto-204600 sudo systemctl                        | auto-204600    | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC | 14 May 24 01:13 UTC |
	|         | status cri-docker --all --full                       |                |                   |         |                     |                     |
	|         | --no-pager                                           |                |                   |         |                     |                     |
	| delete  | -p pause-851700                                      | pause-851700   | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC |                     |
	|         | --alsologtostderr -v=5                               |                |                   |         |                     |                     |
	| ssh     | -p kindnet-204600 sudo crictl                        | kindnet-204600 | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC | 14 May 24 01:13 UTC |
	|         | ps --all                                             |                |                   |         |                     |                     |
	| ssh     | -p auto-204600 sudo systemctl                        | auto-204600    | minikube5\jenkins | v1.33.1 | 14 May 24 01:13 UTC | 14 May 24 01:13 UTC |
	|         | cat cri-docker --no-pager                            |                |                   |         |                     |                     |
	| ssh     | -p kindnet-204600 sudo find                          | kindnet-204600 | minikube5\jenkins | v1.33.1 | 14 May 24 01:13 UTC | 14 May 24 01:13 UTC |
	|         | /etc/cni -type f -exec sh -c                         |                |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |                   |         |                     |                     |
	| ssh     | -p auto-204600 sudo cat                              | auto-204600    | minikube5\jenkins | v1.33.1 | 14 May 24 01:13 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |                   |         |                     |                     |
	|---------|------------------------------------------------------|----------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/14 01:07:49
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0514 01:07:49.618496     744 out.go:291] Setting OutFile to fd 1924 ...
	I0514 01:07:49.618919     744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0514 01:07:49.618919     744 out.go:304] Setting ErrFile to fd 1928...
	I0514 01:07:49.618919     744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0514 01:07:49.639652     744 out.go:298] Setting JSON to false
	I0514 01:07:49.640994     744 start.go:129] hostinfo: {"hostname":"minikube5","uptime":10432,"bootTime":1715638436,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4355 Build 19045.4355","kernelVersion":"10.0.19045.4355 Build 19045.4355","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0514 01:07:49.642544     744 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0514 01:07:49.648261     744 out.go:177] * [calico-204600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	I0514 01:07:49.654614     744 notify.go:220] Checking for updates...
	I0514 01:07:49.657042     744 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0514 01:07:49.658721     744 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0514 01:07:49.661709     744 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0514 01:07:49.664771     744 out.go:177]   - MINIKUBE_LOCATION=18872
	I0514 01:07:49.667354     744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0514 01:07:49.645850    8788 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0514 01:07:49.645928    8788 machine.go:97] duration metric: took 43.8982762s to provisionDockerMachine
	I0514 01:07:49.645985    8788 client.go:171] duration metric: took 1m51.2796852s to LocalClient.Create
	I0514 01:07:49.645985    8788 start.go:167] duration metric: took 1m51.2799693s to libmachine.API.Create "auto-204600"
	I0514 01:07:49.645985    8788 start.go:293] postStartSetup for "auto-204600" (driver="hyperv")
	I0514 01:07:49.646056    8788 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0514 01:07:49.655875    8788 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0514 01:07:49.655875    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-204600 ).state
	I0514 01:07:51.706080    8788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:07:51.706080    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:07:51.706080    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:07:49.671009     744 config.go:182] Loaded profile config "auto-204600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:07:49.671249     744 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:07:49.671835     744 config.go:182] Loaded profile config "kindnet-204600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:07:49.671835     744 config.go:182] Loaded profile config "pause-851700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:07:49.671835     744 driver.go:392] Setting default libvirt URI to qemu:///system
	I0514 01:07:54.571385     744 out.go:177] * Using the hyperv driver based on user configuration
	I0514 01:07:54.574935     744 start.go:297] selected driver: hyperv
	I0514 01:07:54.574935     744 start.go:901] validating driver "hyperv" against <nil>
	I0514 01:07:54.574935     744 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0514 01:07:54.618525     744 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0514 01:07:54.620974     744 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0514 01:07:54.622571     744 cni.go:84] Creating CNI manager for "calico"
	I0514 01:07:54.622571     744 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0514 01:07:54.622669     744 start.go:340] cluster config:
	{Name:calico-204600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:calico-204600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Netwo
rkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0514 01:07:54.622669     744 iso.go:125] acquiring lock: {Name:mkcecbdb7e30e5a0901160a859f9d5b65d250c44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0514 01:07:54.626867     744 out.go:177] * Starting "calico-204600" primary control-plane node in "calico-204600" cluster
	I0514 01:07:54.034463    8788 main.go:141] libmachine: [stdout =====>] : 172.23.105.126
	
	I0514 01:07:54.045614    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:07:54.046027    8788 sshutil.go:53] new ssh client: &{IP:172.23.105.126 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\auto-204600\id_rsa Username:docker}
	I0514 01:07:54.150047    8788 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4937853s)
	I0514 01:07:54.159066    8788 ssh_runner.go:195] Run: cat /etc/os-release
	I0514 01:07:54.166076    8788 info.go:137] Remote host: Buildroot 2023.02.9
	I0514 01:07:54.166076    8788 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0514 01:07:54.166571    8788 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0514 01:07:54.167546    8788 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> 59842.pem in /etc/ssl/certs
	I0514 01:07:54.175823    8788 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0514 01:07:54.195244    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /etc/ssl/certs/59842.pem (1708 bytes)
	I0514 01:07:54.233081    8788 start.go:296] duration metric: took 4.5867968s for postStartSetup
	I0514 01:07:54.240831    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-204600 ).state
	I0514 01:07:56.134653    8788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:07:56.134653    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:07:56.144599    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:07:54.629186     744 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0514 01:07:54.629351     744 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0514 01:07:54.629351     744 cache.go:56] Caching tarball of preloaded images
	I0514 01:07:54.629583     744 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0514 01:07:54.629827     744 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0514 01:07:54.630096     744 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\calico-204600\config.json ...
	I0514 01:07:54.630399     744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\calico-204600\config.json: {Name:mk9b077adce043a6c2bfbde82ee25c30e0afb8f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:07:54.633507     744 start.go:360] acquireMachinesLock for calico-204600: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0514 01:07:58.371872    8788 main.go:141] libmachine: [stdout =====>] : 172.23.105.126
	
	I0514 01:07:58.371872    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:07:58.382144    8788 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\config.json ...
	I0514 01:07:58.384623    8788 start.go:128] duration metric: took 2m0.0218819s to createHost
	I0514 01:07:58.384723    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-204600 ).state
	I0514 01:08:00.202313    8788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:08:00.202313    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:00.202313    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:08:02.487710    8788 main.go:141] libmachine: [stdout =====>] : 172.23.105.126
	
	I0514 01:08:02.497653    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:02.501704    8788 main.go:141] libmachine: Using SSH client type: native
	I0514 01:08:02.502097    8788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.105.126 22 <nil> <nil>}
	I0514 01:08:02.502097    8788 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0514 01:08:02.638388    8788 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715648882.877798917
	
	I0514 01:08:02.638489    8788 fix.go:216] guest clock: 1715648882.877798917
	I0514 01:08:02.638489    8788 fix.go:229] Guest: 2024-05-14 01:08:02.877798917 +0000 UTC Remote: 2024-05-14 01:07:58.3846721 +0000 UTC m=+391.718110301 (delta=4.493126817s)
	I0514 01:08:02.638573    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-204600 ).state
	I0514 01:08:04.527319    8788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:08:04.527319    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:04.527629    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:08:06.735640    8788 main.go:141] libmachine: [stdout =====>] : 172.23.105.126
	
	I0514 01:08:06.735640    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:06.749065    8788 main.go:141] libmachine: Using SSH client type: native
	I0514 01:08:06.749365    8788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.105.126 22 <nil> <nil>}
	I0514 01:08:06.749365    8788 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715648882
	I0514 01:08:06.910963    7260 start.go:364] duration metric: took 4m31.8000076s to acquireMachinesLock for "kindnet-204600"
	I0514 01:08:06.911583    7260 start.go:93] Provisioning new machine with config: &{Name:kindnet-204600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-204600 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0514 01:08:06.911884    7260 start.go:125] createHost starting for "" (driver="hyperv")
	I0514 01:08:06.915285    7260 out.go:204] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0514 01:08:06.915980    7260 start.go:159] libmachine.API.Create for "kindnet-204600" (driver="hyperv")
	I0514 01:08:06.915980    7260 client.go:168] LocalClient.Create starting
	I0514 01:08:06.916682    7260 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0514 01:08:06.917039    7260 main.go:141] libmachine: Decoding PEM data...
	I0514 01:08:06.917209    7260 main.go:141] libmachine: Parsing certificate...
	I0514 01:08:06.917386    7260 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0514 01:08:06.917730    7260 main.go:141] libmachine: Decoding PEM data...
	I0514 01:08:06.917730    7260 main.go:141] libmachine: Parsing certificate...
	I0514 01:08:06.917922    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0514 01:08:08.628285    7260 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0514 01:08:08.628285    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:08.637447    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0514 01:08:06.905877    8788 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 14 01:08:02 UTC 2024
	
	I0514 01:08:06.905877    8788 fix.go:236] clock set: Tue May 14 01:08:02 UTC 2024
	 (err=<nil>)
	I0514 01:08:06.905877    8788 start.go:83] releasing machines lock for "auto-204600", held for 2m8.5431984s
	I0514 01:08:06.905877    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-204600 ).state
	I0514 01:08:08.865460    8788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:08:08.865460    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:08.865460    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:08:11.168159    8788 main.go:141] libmachine: [stdout =====>] : 172.23.105.126
	
	I0514 01:08:11.178846    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:11.181882    8788 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0514 01:08:11.182042    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-204600 ).state
	I0514 01:08:11.190743    8788 ssh_runner.go:195] Run: cat /version.json
	I0514 01:08:11.190743    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-204600 ).state
	I0514 01:08:10.181233    7260 main.go:141] libmachine: [stdout =====>] : False
	
	I0514 01:08:10.181233    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:10.188665    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0514 01:08:11.578248    7260 main.go:141] libmachine: [stdout =====>] : True
	
	I0514 01:08:11.587420    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:11.587420    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0514 01:08:15.004684    7260 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0514 01:08:15.011759    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:15.013283    7260 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-amd64.iso...
	I0514 01:08:13.205346    8788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:08:13.205346    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:13.205346    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:08:13.217689    8788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:08:13.217689    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:13.217689    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:08:15.589619    8788 main.go:141] libmachine: [stdout =====>] : 172.23.105.126
	
	I0514 01:08:15.589668    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:15.589997    8788 sshutil.go:53] new ssh client: &{IP:172.23.105.126 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\auto-204600\id_rsa Username:docker}
	I0514 01:08:15.620870    8788 main.go:141] libmachine: [stdout =====>] : 172.23.105.126
	
	I0514 01:08:15.620870    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:15.621424    8788 sshutil.go:53] new ssh client: &{IP:172.23.105.126 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\auto-204600\id_rsa Username:docker}
	I0514 01:08:15.701630    8788 ssh_runner.go:235] Completed: cat /version.json: (4.5105912s)
	I0514 01:08:15.710729    8788 ssh_runner.go:195] Run: systemctl --version
	I0514 01:08:15.819151    8788 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6369652s)
	I0514 01:08:15.829153    8788 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0514 01:08:15.837359    8788 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0514 01:08:15.845540    8788 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0514 01:08:15.865712    8788 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0514 01:08:15.865712    8788 start.go:494] detecting cgroup driver to use...
	I0514 01:08:15.865712    8788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 01:08:15.912444    8788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0514 01:08:15.946560    8788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0514 01:08:15.963872    8788 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0514 01:08:15.976047    8788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0514 01:08:16.004158    8788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 01:08:16.031757    8788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0514 01:08:16.061787    8788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 01:08:16.091479    8788 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0514 01:08:16.121479    8788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0514 01:08:16.148271    8788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0514 01:08:16.175581    8788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
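	The run of `sed` commands above rewrites containerd's `config.toml` in place over SSH; the cgroup-driver edit can be reproduced locally on a scratch copy (file path and contents here are a minimal illustration, not the VM's actual config):

```shell
# Sketch: reproduce the SystemdCgroup rewrite from the log on a scratch file.
cat > /tmp/config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same substitution the log runs inside the VM: force cgroupfs by disabling
# the systemd cgroup driver, preserving the line's indentation via \1.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /tmp/config.toml
grep SystemdCgroup /tmp/config.toml
```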
	I0514 01:08:16.210520    8788 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0514 01:08:16.236467    8788 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0514 01:08:16.265116    8788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:08:16.469097    8788 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0514 01:08:16.496045    8788 start.go:494] detecting cgroup driver to use...
	I0514 01:08:16.508051    8788 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0514 01:08:16.541208    8788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 01:08:16.571657    8788 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0514 01:08:16.608680    8788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 01:08:16.637300    8788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0514 01:08:16.668698    8788 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0514 01:08:15.347641    7260 main.go:141] libmachine: Creating SSH key...
	I0514 01:08:15.606054    7260 main.go:141] libmachine: Creating VM...
	I0514 01:08:15.606054    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0514 01:08:18.333075    7260 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0514 01:08:18.343960    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:18.344046    7260 main.go:141] libmachine: Using switch "Default Switch"
	I0514 01:08:18.344046    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0514 01:08:19.855049    7260 main.go:141] libmachine: [stdout =====>] : True
	
	I0514 01:08:19.861088    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:19.861088    7260 main.go:141] libmachine: Creating VHD
	I0514 01:08:19.861179    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0514 01:08:16.891651    8788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0514 01:08:16.915168    8788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 01:08:16.964137    8788 ssh_runner.go:195] Run: which cri-dockerd
	I0514 01:08:16.981372    8788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0514 01:08:16.999131    8788 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0514 01:08:17.038646    8788 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0514 01:08:17.217684    8788 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0514 01:08:17.401521    8788 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0514 01:08:17.406479    8788 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0514 01:08:17.445894    8788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:08:17.627472    8788 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0514 01:08:20.204247    8788 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5766063s)
	I0514 01:08:20.218253    8788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0514 01:08:20.249738    8788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 01:08:20.281146    8788 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0514 01:08:20.484482    8788 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0514 01:08:20.657538    8788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:08:20.833439    8788 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0514 01:08:20.870925    8788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 01:08:20.903187    8788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:08:21.067025    8788 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0514 01:08:21.162477    8788 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0514 01:08:21.170751    8788 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0514 01:08:21.178554    8788 start.go:562] Will wait 60s for crictl version
	I0514 01:08:21.187763    8788 ssh_runner.go:195] Run: which crictl
	I0514 01:08:21.202873    8788 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0514 01:08:21.250212    8788 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0514 01:08:21.257222    8788 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 01:08:21.291154    8788 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 01:08:21.329000    8788 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0514 01:08:21.329000    8788 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0514 01:08:21.334622    8788 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0514 01:08:21.334622    8788 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0514 01:08:21.335141    8788 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0514 01:08:21.335141    8788 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:95:ed Flags:up|broadcast|multicast|running}
	I0514 01:08:21.339161    8788 ip.go:210] interface addr: fe80::3ceb:68d:afab:af25/64
	I0514 01:08:21.339161    8788 ip.go:210] interface addr: 172.23.96.1/20
	I0514 01:08:21.353281    8788 ssh_runner.go:195] Run: grep 172.23.96.1	host.minikube.internal$ /etc/hosts
	I0514 01:08:21.361026    8788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
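	The `/etc/hosts` update above uses a dedupe-then-append pattern: strip any stale `host.minikube.internal` entry, then re-add the current one atomically via a temp file. The same pattern on a scratch file (paths and IP are illustrative):

```shell
# Sketch of minikube's hosts-file update, run against a demo file.
HOSTS=/tmp/hosts.demo
TAB="$(printf '\t')"
printf '127.0.0.1\tlocalhost\n10.0.0.1\thost.minikube.internal\n' > "$HOSTS"
# Drop any existing entry, append the fresh one, then swap the file in.
{ grep -v "${TAB}host.minikube.internal\$" "$HOSTS"; printf '172.23.96.1\thost.minikube.internal\n'; } > /tmp/h.$$
cp /tmp/h.$$ "$HOSTS"
grep -c 'host.minikube.internal' "$HOSTS"
```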
	I0514 01:08:21.386117    8788 kubeadm.go:877] updating cluster {Name:auto-204600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-204600 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.105.126 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0514 01:08:21.386371    8788 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0514 01:08:21.393236    8788 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0514 01:08:21.411868    8788 docker.go:685] Got preloaded images: 
	I0514 01:08:21.411868    8788 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0514 01:08:21.420265    8788 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0514 01:08:21.444324    8788 ssh_runner.go:195] Run: which lz4
	I0514 01:08:21.458487    8788 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0514 01:08:21.461726    8788 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0514 01:08:21.466299    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0514 01:08:23.611767    7260 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 25982223-F9E9-4063-867D-C430D140FBC7
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0514 01:08:23.611866    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:23.612088    7260 main.go:141] libmachine: Writing magic tar header
	I0514 01:08:23.612209    7260 main.go:141] libmachine: Writing SSH key tar header
	I0514 01:08:23.619962    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0514 01:08:23.566622    8788 docker.go:649] duration metric: took 2.11557s to copy over tarball
	I0514 01:08:23.575437    8788 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0514 01:08:26.592355    7260 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:08:26.594234    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:26.594234    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\disk.vhd' -SizeBytes 20000MB
	I0514 01:08:29.134897    7260 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:08:29.134897    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:29.134897    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM kindnet-204600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0514 01:08:34.112411    7260 main.go:141] libmachine: [stdout =====>] : 
	Name           State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----           ----- ----------- ----------------- ------   ------             -------
	kindnet-204600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0514 01:08:34.122356    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:34.122356    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName kindnet-204600 -DynamicMemoryEnabled $false
	I0514 01:08:32.263814    8788 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6878054s)
	I0514 01:08:32.263954    8788 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0514 01:08:32.323225    8788 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0514 01:08:32.342135    8788 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0514 01:08:32.382630    8788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:08:32.554409    8788 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0514 01:08:36.671839    8788 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.1171596s)
	I0514 01:08:36.679580    8788 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0514 01:08:36.701458    8788 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0514 01:08:36.701512    8788 cache_images.go:84] Images are preloaded, skipping loading
	I0514 01:08:36.701512    8788 kubeadm.go:928] updating node { 172.23.105.126 8443 v1.30.0 docker true true} ...
	I0514 01:08:36.701731    8788 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-204600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.105.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:auto-204600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0514 01:08:36.708866    8788 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0514 01:08:36.739696    8788 cni.go:84] Creating CNI manager for ""
	I0514 01:08:36.739790    8788 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0514 01:08:36.739790    8788 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0514 01:08:36.739883    8788 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.105.126 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-204600 NodeName:auto-204600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.105.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.105.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0514 01:08:36.740116    8788 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.105.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "auto-204600"
	  kubeletExtraArgs:
	    node-ip: 172.23.105.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.105.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0514 01:08:36.748607    8788 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0514 01:08:36.766534    8788 binaries.go:44] Found k8s binaries, skipping transfer
	I0514 01:08:36.774632    8788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0514 01:08:36.383264    7260 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:08:36.383339    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:36.383339    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor kindnet-204600 -Count 2
	I0514 01:08:38.359132    7260 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:08:38.359132    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:38.365301    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName kindnet-204600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\boot2docker.iso'
	I0514 01:08:36.798822    8788 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0514 01:08:36.828397    8788 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0514 01:08:36.859035    8788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0514 01:08:36.896074    8788 ssh_runner.go:195] Run: grep 172.23.105.126	control-plane.minikube.internal$ /etc/hosts
	I0514 01:08:36.902460    8788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.105.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0514 01:08:36.935223    8788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:08:37.110923    8788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0514 01:08:37.137245    8788 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600 for IP: 172.23.105.126
	I0514 01:08:37.137361    8788 certs.go:194] generating shared ca certs ...
	I0514 01:08:37.137416    8788 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:08:37.137667    8788 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0514 01:08:37.138372    8788 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0514 01:08:37.138486    8788 certs.go:256] generating profile certs ...
	I0514 01:08:37.139052    8788 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\client.key
	I0514 01:08:37.139157    8788 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\client.crt with IP's: []
	I0514 01:08:37.924049    8788 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\client.crt ...
	I0514 01:08:37.924049    8788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\client.crt: {Name:mk9ef5d9715996082b511c57d50d77171fe15bed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:08:37.925469    8788 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\client.key ...
	I0514 01:08:37.925469    8788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\client.key: {Name:mk9a7abc7b9c802b982e8bcc449e03d42ee8f776 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:08:37.926467    8788 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\apiserver.key.656d5658
	I0514 01:08:37.926467    8788 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\apiserver.crt.656d5658 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.105.126]
	I0514 01:08:38.121280    8788 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\apiserver.crt.656d5658 ...
	I0514 01:08:38.121280    8788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\apiserver.crt.656d5658: {Name:mkad59b02e5ab02952d566053a90503e0d1fceb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:08:38.127775    8788 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\apiserver.key.656d5658 ...
	I0514 01:08:38.127775    8788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\apiserver.key.656d5658: {Name:mkc7dde2a9da89392ad4bc1cf9f8482373a0b003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:08:38.128675    8788 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\apiserver.crt.656d5658 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\apiserver.crt
	I0514 01:08:38.139896    8788 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\apiserver.key.656d5658 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\apiserver.key
	I0514 01:08:38.140690    8788 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\proxy-client.key
	I0514 01:08:38.140690    8788 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\proxy-client.crt with IP's: []
	I0514 01:08:38.554131    8788 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\proxy-client.crt ...
	I0514 01:08:38.554131    8788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\proxy-client.crt: {Name:mkd35f616dea7103668518ae7470f3b9a667195f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:08:38.558853    8788 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\proxy-client.key ...
	I0514 01:08:38.558853    8788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\proxy-client.key: {Name:mk68c5c6e7c47ca76eebda32f86e1aedfe9ed236 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:08:38.564486    8788 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem (1338 bytes)
	W0514 01:08:38.570776    8788 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984_empty.pem, impossibly tiny 0 bytes
	I0514 01:08:38.570776    8788 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0514 01:08:38.571095    8788 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0514 01:08:38.571291    8788 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0514 01:08:38.571430    8788 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0514 01:08:38.571621    8788 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem (1708 bytes)
	I0514 01:08:38.571908    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0514 01:08:38.617461    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0514 01:08:38.654280    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0514 01:08:38.698922    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0514 01:08:38.740951    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0514 01:08:38.793866    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0514 01:08:38.836904    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0514 01:08:38.882147    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0514 01:08:38.928818    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0514 01:08:38.968961    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem --> /usr/share/ca-certificates/5984.pem (1338 bytes)
	I0514 01:08:39.011875    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /usr/share/ca-certificates/59842.pem (1708 bytes)
	I0514 01:08:39.050865    8788 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0514 01:08:39.087667    8788 ssh_runner.go:195] Run: openssl version
	I0514 01:08:39.105178    8788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5984.pem && ln -fs /usr/share/ca-certificates/5984.pem /etc/ssl/certs/5984.pem"
	I0514 01:08:39.132526    8788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5984.pem
	I0514 01:08:39.141172    8788 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0514 01:08:39.149689    8788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5984.pem
	I0514 01:08:39.166333    8788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5984.pem /etc/ssl/certs/51391683.0"
	I0514 01:08:39.190714    8788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59842.pem && ln -fs /usr/share/ca-certificates/59842.pem /etc/ssl/certs/59842.pem"
	I0514 01:08:39.217495    8788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59842.pem
	I0514 01:08:39.223949    8788 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0514 01:08:39.232351    8788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59842.pem
	I0514 01:08:39.249253    8788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59842.pem /etc/ssl/certs/3ec20f2e.0"
	I0514 01:08:39.276473    8788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0514 01:08:39.306414    8788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0514 01:08:39.315192    8788 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0514 01:08:39.328716    8788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0514 01:08:39.352838    8788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0514 01:08:39.387566    8788 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0514 01:08:39.397506    8788 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0514 01:08:39.397506    8788 kubeadm.go:391] StartCluster: {Name:auto-204600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-204600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.105.126 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0514 01:08:39.407768    8788 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0514 01:08:39.443378    8788 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0514 01:08:39.471099    8788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0514 01:08:39.497575    8788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0514 01:08:39.513628    8788 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0514 01:08:39.513628    8788 kubeadm.go:156] found existing configuration files:
	
	I0514 01:08:39.522742    8788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0514 01:08:39.538684    8788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0514 01:08:39.551320    8788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0514 01:08:39.575552    8788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0514 01:08:39.592185    8788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0514 01:08:39.602308    8788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0514 01:08:39.626150    8788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0514 01:08:39.627706    8788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0514 01:08:39.651039    8788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0514 01:08:39.678769    8788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0514 01:08:39.694320    8788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0514 01:08:39.705138    8788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0514 01:08:39.718508    8788 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0514 01:08:39.927579    8788 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0514 01:08:40.736240    7260 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:08:40.736240    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:40.736240    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName kindnet-204600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\disk.vhd'
	I0514 01:08:43.076921    7260 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:08:43.076921    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:43.076921    7260 main.go:141] libmachine: Starting VM...
	I0514 01:08:43.077120    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM kindnet-204600
	I0514 01:08:45.993214    7260 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:08:45.993214    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:45.993214    7260 main.go:141] libmachine: Waiting for host to start...
	I0514 01:08:45.993272    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:08:48.011294    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:08:48.011294    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:48.016627    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:08:53.170813    8788 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0514 01:08:53.170935    8788 kubeadm.go:309] [preflight] Running pre-flight checks
	I0514 01:08:53.171202    8788 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0514 01:08:53.171627    8788 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0514 01:08:53.172034    8788 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0514 01:08:53.172185    8788 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0514 01:08:53.174566    8788 out.go:204]   - Generating certificates and keys ...
	I0514 01:08:53.174858    8788 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0514 01:08:53.174975    8788 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0514 01:08:53.175198    8788 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0514 01:08:53.175311    8788 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0514 01:08:53.175425    8788 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0514 01:08:53.175683    8788 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0514 01:08:53.175873    8788 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0514 01:08:53.176321    8788 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [auto-204600 localhost] and IPs [172.23.105.126 127.0.0.1 ::1]
	I0514 01:08:53.176513    8788 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0514 01:08:53.177000    8788 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [auto-204600 localhost] and IPs [172.23.105.126 127.0.0.1 ::1]
	I0514 01:08:53.177360    8788 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0514 01:08:53.177583    8788 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0514 01:08:53.177771    8788 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0514 01:08:53.177961    8788 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0514 01:08:53.177961    8788 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0514 01:08:53.177961    8788 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0514 01:08:53.177961    8788 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0514 01:08:53.178507    8788 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0514 01:08:53.178812    8788 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0514 01:08:53.178986    8788 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0514 01:08:53.179102    8788 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0514 01:08:53.182052    8788 out.go:204]   - Booting up control plane ...
	I0514 01:08:53.182758    8788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0514 01:08:53.182758    8788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0514 01:08:53.182758    8788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0514 01:08:53.183477    8788 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0514 01:08:53.183477    8788 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0514 01:08:53.183477    8788 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0514 01:08:53.183477    8788 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0514 01:08:53.184172    8788 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0514 01:08:53.184331    8788 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001652467s
	I0514 01:08:53.184605    8788 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0514 01:08:53.184811    8788 kubeadm.go:309] [api-check] The API server is healthy after 7.002502319s
	I0514 01:08:53.185078    8788 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0514 01:08:53.185735    8788 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0514 01:08:53.185973    8788 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0514 01:08:53.186622    8788 kubeadm.go:309] [mark-control-plane] Marking the node auto-204600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0514 01:08:53.186742    8788 kubeadm.go:309] [bootstrap-token] Using token: t479qx.5zv0wf6iyoa52qxl
	I0514 01:08:53.189906    8788 out.go:204]   - Configuring RBAC rules ...
	I0514 01:08:53.190083    8788 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0514 01:08:53.190369    8788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0514 01:08:53.190639    8788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0514 01:08:53.190639    8788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0514 01:08:53.191286    8788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0514 01:08:53.191534    8788 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0514 01:08:53.191771    8788 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0514 01:08:53.191835    8788 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0514 01:08:53.191956    8788 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0514 01:08:53.192015    8788 kubeadm.go:309] 
	I0514 01:08:53.192130    8788 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0514 01:08:53.192130    8788 kubeadm.go:309] 
	I0514 01:08:53.192424    8788 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0514 01:08:53.192476    8788 kubeadm.go:309] 
	I0514 01:08:53.192627    8788 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0514 01:08:53.192746    8788 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0514 01:08:53.192866    8788 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0514 01:08:53.192866    8788 kubeadm.go:309] 
	I0514 01:08:53.193050    8788 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0514 01:08:53.193105    8788 kubeadm.go:309] 
	I0514 01:08:53.193210    8788 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0514 01:08:53.193210    8788 kubeadm.go:309] 
	I0514 01:08:53.193210    8788 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0514 01:08:53.193210    8788 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0514 01:08:53.193815    8788 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0514 01:08:53.193869    8788 kubeadm.go:309] 
	I0514 01:08:53.194186    8788 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0514 01:08:53.194223    8788 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0514 01:08:53.194223    8788 kubeadm.go:309] 
	I0514 01:08:53.194223    8788 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token t479qx.5zv0wf6iyoa52qxl \
	I0514 01:08:53.194223    8788 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 \
	I0514 01:08:53.194826    8788 kubeadm.go:309] 	--control-plane 
	I0514 01:08:53.194826    8788 kubeadm.go:309] 
	I0514 01:08:53.195017    8788 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0514 01:08:53.195078    8788 kubeadm.go:309] 
	I0514 01:08:53.195323    8788 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token t479qx.5zv0wf6iyoa52qxl \
	I0514 01:08:53.195719    8788 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 
	I0514 01:08:53.195810    8788 cni.go:84] Creating CNI manager for ""
	I0514 01:08:53.195810    8788 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0514 01:08:53.199008    8788 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0514 01:08:50.253085    7260 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:08:50.254766    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:51.267891    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:08:53.254397    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:08:53.254397    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:53.254844    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:08:53.212825    8788 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0514 01:08:53.232601    8788 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0514 01:08:53.274849    8788 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0514 01:08:53.284999    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:53.284999    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-204600 minikube.k8s.io/updated_at=2024_05_14T01_08_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761 minikube.k8s.io/name=auto-204600 minikube.k8s.io/primary=true
	I0514 01:08:53.293072    8788 ops.go:34] apiserver oom_adj: -16
	I0514 01:08:53.457800    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:53.968848    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:54.461491    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:54.961365    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:55.472619    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:55.964402    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:56.461053    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:55.506107    7260 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:08:55.506147    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:56.508389    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:08:58.439526    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:08:58.439758    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:58.439794    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:08:56.962127    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:57.458836    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:57.971407    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:58.478671    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:58.967812    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:59.458163    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:59.958874    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:00.456624    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:00.959976    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:01.470847    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:00.653440    7260 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:09:00.653440    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:01.671620    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:03.624091    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:03.633990    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:03.633990    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:01.960988    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:02.472805    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:02.965929    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:03.459702    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:03.970452    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:04.458987    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:04.969068    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:05.466448    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:05.975441    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:06.073448    8788 kubeadm.go:1107] duration metric: took 12.7976299s to wait for elevateKubeSystemPrivileges
	W0514 01:09:06.073570    8788 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0514 01:09:06.073631    8788 kubeadm.go:393] duration metric: took 26.6743596s to StartCluster
	I0514 01:09:06.073690    8788 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:09:06.073811    8788 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0514 01:09:06.075729    8788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:09:06.076730    8788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0514 01:09:06.076843    8788 start.go:234] Will wait 15m0s for node &{Name: IP:172.23.105.126 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0514 01:09:06.076843    8788 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0514 01:09:06.076954    8788 addons.go:69] Setting storage-provisioner=true in profile "auto-204600"
	I0514 01:09:06.076954    8788 addons.go:234] Setting addon storage-provisioner=true in "auto-204600"
	I0514 01:09:06.076954    8788 addons.go:69] Setting default-storageclass=true in profile "auto-204600"
	I0514 01:09:06.077061    8788 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-204600"
	I0514 01:09:06.083105    8788 out.go:177] * Verifying Kubernetes components...
	I0514 01:09:06.077061    8788 host.go:66] Checking if "auto-204600" exists ...
	I0514 01:09:06.077061    8788 config.go:182] Loaded profile config "auto-204600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:09:06.077969    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-204600 ).state
	I0514 01:09:06.084146    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-204600 ).state
	I0514 01:09:06.097476    8788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:09:06.303319    8788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.23.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0514 01:09:06.509019    8788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0514 01:09:07.053447    8788 start.go:946] {"host.minikube.internal": 172.23.96.1} host record injected into CoreDNS's ConfigMap
	I0514 01:09:07.063629    8788 node_ready.go:35] waiting up to 15m0s for node "auto-204600" to be "Ready" ...
	I0514 01:09:07.102243    8788 node_ready.go:49] node "auto-204600" has status "Ready":"True"
	I0514 01:09:07.102243    8788 node_ready.go:38] duration metric: took 38.6111ms for node "auto-204600" to be "Ready" ...
	I0514 01:09:07.102243    8788 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 01:09:07.118192    8788 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-9ssxc" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:07.569831    8788 kapi.go:248] "coredns" deployment in "kube-system" namespace and "auto-204600" context rescaled to 1 replicas
	I0514 01:09:08.381131    8788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:08.381131    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:08.383346    8788 addons.go:234] Setting addon default-storageclass=true in "auto-204600"
	I0514 01:09:08.383346    8788 host.go:66] Checking if "auto-204600" exists ...
	I0514 01:09:08.384732    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-204600 ).state
	I0514 01:09:08.399151    8788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:08.400170    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:08.404161    8788 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0514 01:09:05.889990    7260 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:09:05.897604    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:06.912673    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:09.178526    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:09.180223    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:09.180316    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:08.407765    8788 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0514 01:09:08.407765    8788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0514 01:09:08.407765    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-204600 ).state
	I0514 01:09:09.146184    8788 pod_ready.go:102] pod "coredns-7db6d8ff4d-9ssxc" in "kube-system" namespace has status "Ready":"False"
	I0514 01:09:10.632402    8788 pod_ready.go:92] pod "coredns-7db6d8ff4d-9ssxc" in "kube-system" namespace has status "Ready":"True"
	I0514 01:09:10.632402    8788 pod_ready.go:81] duration metric: took 3.5133842s for pod "coredns-7db6d8ff4d-9ssxc" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:10.632402    8788 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-rdwpl" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:10.633213    8788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:10.633213    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:10.635299    8788 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0514 01:09:10.635299    8788 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0514 01:09:10.635376    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-204600 ).state
	I0514 01:09:10.635852    8788 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-rdwpl" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-rdwpl" not found
	I0514 01:09:10.635914    8788 pod_ready.go:81] duration metric: took 3.5117ms for pod "coredns-7db6d8ff4d-rdwpl" in "kube-system" namespace to be "Ready" ...
	E0514 01:09:10.635914    8788 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-rdwpl" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-rdwpl" not found
	I0514 01:09:10.635975    8788 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:10.639943    8788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:10.640005    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:10.640070    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:10.651589    8788 pod_ready.go:92] pod "etcd-auto-204600" in "kube-system" namespace has status "Ready":"True"
	I0514 01:09:10.651690    8788 pod_ready.go:81] duration metric: took 15.6627ms for pod "etcd-auto-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:10.651690    8788 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:10.662569    8788 pod_ready.go:92] pod "kube-apiserver-auto-204600" in "kube-system" namespace has status "Ready":"True"
	I0514 01:09:10.662630    8788 pod_ready.go:81] duration metric: took 10.8859ms for pod "kube-apiserver-auto-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:10.662630    8788 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:10.671444    8788 pod_ready.go:92] pod "kube-controller-manager-auto-204600" in "kube-system" namespace has status "Ready":"True"
	I0514 01:09:10.671444    8788 pod_ready.go:81] duration metric: took 8.8137ms for pod "kube-controller-manager-auto-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:10.671444    8788 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-lmjhb" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:10.836020    8788 pod_ready.go:92] pod "kube-proxy-lmjhb" in "kube-system" namespace has status "Ready":"True"
	I0514 01:09:10.836095    8788 pod_ready.go:81] duration metric: took 164.64ms for pod "kube-proxy-lmjhb" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:10.836095    8788 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:11.241493    8788 pod_ready.go:92] pod "kube-scheduler-auto-204600" in "kube-system" namespace has status "Ready":"True"
	I0514 01:09:11.241493    8788 pod_ready.go:81] duration metric: took 405.3153ms for pod "kube-scheduler-auto-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:11.241493    8788 pod_ready.go:38] duration metric: took 4.138976s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 01:09:11.241493    8788 api_server.go:52] waiting for apiserver process to appear ...
	I0514 01:09:11.253374    8788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 01:09:11.278079    8788 api_server.go:72] duration metric: took 5.2007802s to wait for apiserver process to appear ...
	I0514 01:09:11.278144    8788 api_server.go:88] waiting for apiserver healthz status ...
	I0514 01:09:11.278144    8788 api_server.go:253] Checking apiserver healthz at https://172.23.105.126:8443/healthz ...
	I0514 01:09:11.283913    8788 api_server.go:279] https://172.23.105.126:8443/healthz returned 200:
	ok
	I0514 01:09:11.287411    8788 api_server.go:141] control plane version: v1.30.0
	I0514 01:09:11.287493    8788 api_server.go:131] duration metric: took 9.2657ms to wait for apiserver health ...
	I0514 01:09:11.287493    8788 system_pods.go:43] waiting for kube-system pods to appear ...
	I0514 01:09:11.449503    8788 system_pods.go:59] 6 kube-system pods found
	I0514 01:09:11.449503    8788 system_pods.go:61] "coredns-7db6d8ff4d-9ssxc" [a50f7aa7-22b6-4b44-86aa-bba35968ca6b] Running
	I0514 01:09:11.449503    8788 system_pods.go:61] "etcd-auto-204600" [a88faf6b-6b36-4f32-a559-75553032b986] Running
	I0514 01:09:11.449503    8788 system_pods.go:61] "kube-apiserver-auto-204600" [5d597342-10b3-4a26-b00c-b6b20b276ab4] Running
	I0514 01:09:11.449503    8788 system_pods.go:61] "kube-controller-manager-auto-204600" [18d47d96-bd08-4c2e-87d3-1652140ab6cf] Running
	I0514 01:09:11.449503    8788 system_pods.go:61] "kube-proxy-lmjhb" [fbc73802-4f22-4961-a610-2a7d525f1852] Running
	I0514 01:09:11.449503    8788 system_pods.go:61] "kube-scheduler-auto-204600" [9e713973-3bb0-4361-8a4c-4ab8453f6f84] Running
	I0514 01:09:11.449503    8788 system_pods.go:74] duration metric: took 161.9993ms to wait for pod list to return data ...
	I0514 01:09:11.449503    8788 default_sa.go:34] waiting for default service account to be created ...
	I0514 01:09:11.640939    8788 default_sa.go:45] found service account: "default"
	I0514 01:09:11.640939    8788 default_sa.go:55] duration metric: took 191.4229ms for default service account to be created ...
	I0514 01:09:11.640939    8788 system_pods.go:116] waiting for k8s-apps to be running ...
	I0514 01:09:11.760418    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:09:11.769948    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:11.770033    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:13.807200    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:13.817386    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:13.817386    7260 machine.go:94] provisionDockerMachine start ...
	I0514 01:09:13.817477    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:11.851348    8788 system_pods.go:86] 6 kube-system pods found
	I0514 01:09:11.851348    8788 system_pods.go:89] "coredns-7db6d8ff4d-9ssxc" [a50f7aa7-22b6-4b44-86aa-bba35968ca6b] Running
	I0514 01:09:11.851348    8788 system_pods.go:89] "etcd-auto-204600" [a88faf6b-6b36-4f32-a559-75553032b986] Running
	I0514 01:09:11.851348    8788 system_pods.go:89] "kube-apiserver-auto-204600" [5d597342-10b3-4a26-b00c-b6b20b276ab4] Running
	I0514 01:09:11.851348    8788 system_pods.go:89] "kube-controller-manager-auto-204600" [18d47d96-bd08-4c2e-87d3-1652140ab6cf] Running
	I0514 01:09:11.851348    8788 system_pods.go:89] "kube-proxy-lmjhb" [fbc73802-4f22-4961-a610-2a7d525f1852] Running
	I0514 01:09:11.851348    8788 system_pods.go:89] "kube-scheduler-auto-204600" [9e713973-3bb0-4361-8a4c-4ab8453f6f84] Running
	I0514 01:09:11.851348    8788 system_pods.go:126] duration metric: took 210.3956ms to wait for k8s-apps to be running ...
	I0514 01:09:11.851348    8788 system_svc.go:44] waiting for kubelet service to be running ....
	I0514 01:09:11.863490    8788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0514 01:09:11.889623    8788 system_svc.go:56] duration metric: took 38.2725ms WaitForService to wait for kubelet
	I0514 01:09:11.889732    8788 kubeadm.go:576] duration metric: took 5.8123385s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0514 01:09:11.889777    8788 node_conditions.go:102] verifying NodePressure condition ...
	I0514 01:09:12.033534    8788 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 01:09:12.033655    8788 node_conditions.go:123] node cpu capacity is 2
	I0514 01:09:12.033655    8788 node_conditions.go:105] duration metric: took 143.8682ms to run NodePressure ...
	I0514 01:09:12.033655    8788 start.go:240] waiting for startup goroutines ...
	I0514 01:09:12.755576    8788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:12.755576    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:12.756253    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:13.125157    8788 main.go:141] libmachine: [stdout =====>] : 172.23.105.126
	
	I0514 01:09:13.133001    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:13.133491    8788 sshutil.go:53] new ssh client: &{IP:172.23.105.126 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\auto-204600\id_rsa Username:docker}
	I0514 01:09:13.278014    8788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0514 01:09:15.116521    8788 main.go:141] libmachine: [stdout =====>] : 172.23.105.126
	
	I0514 01:09:15.116627    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:15.116944    8788 sshutil.go:53] new ssh client: &{IP:172.23.105.126 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\auto-204600\id_rsa Username:docker}
	I0514 01:09:15.252207    8788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0514 01:09:15.455617    8788 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0514 01:09:15.457793    8788 addons.go:505] duration metric: took 9.3803276s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0514 01:09:15.457793    8788 start.go:245] waiting for cluster config update ...
	I0514 01:09:15.457793    8788 start.go:254] writing updated cluster config ...
	I0514 01:09:15.466957    8788 ssh_runner.go:195] Run: rm -f paused
	I0514 01:09:15.585460    8788 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0514 01:09:15.588870    8788 out.go:177] * Done! kubectl is now configured to use "auto-204600" cluster and "default" namespace by default
	I0514 01:09:15.799784    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:15.799784    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:15.799985    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:18.127941    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:09:18.133803    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:18.139592    7260 main.go:141] libmachine: Using SSH client type: native
	I0514 01:09:18.149364    7260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.99.4 22 <nil> <nil>}
	I0514 01:09:18.149364    7260 main.go:141] libmachine: About to run SSH command:
	hostname
	I0514 01:09:18.290368    7260 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0514 01:09:18.290368    7260 buildroot.go:166] provisioning hostname "kindnet-204600"
	I0514 01:09:18.290538    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:20.219261    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:20.229414    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:20.229537    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:22.521225    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:09:22.521225    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:22.535851    7260 main.go:141] libmachine: Using SSH client type: native
	I0514 01:09:22.536355    7260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.99.4 22 <nil> <nil>}
	I0514 01:09:22.536355    7260 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-204600 && echo "kindnet-204600" | sudo tee /etc/hostname
	I0514 01:09:22.674787    7260 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-204600
	
	I0514 01:09:22.674898    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:24.631892    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:24.631892    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:24.631892    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:26.938533    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:09:26.938533    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:26.947558    7260 main.go:141] libmachine: Using SSH client type: native
	I0514 01:09:26.947558    7260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.99.4 22 <nil> <nil>}
	I0514 01:09:26.947558    7260 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-204600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-204600/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-204600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0514 01:09:27.098960    7260 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0514 01:09:27.099056    7260 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0514 01:09:27.099176    7260 buildroot.go:174] setting up certificates
	I0514 01:09:27.099176    7260 provision.go:84] configureAuth start
	I0514 01:09:27.099228    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:29.050369    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:29.050369    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:29.050369    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:31.325342    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:09:31.335505    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:31.335803    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:33.221308    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:33.221308    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:33.221442    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:35.459227    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:09:35.459227    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:35.459227    7260 provision.go:143] copyHostCerts
	I0514 01:09:35.469704    7260 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0514 01:09:35.469800    7260 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0514 01:09:35.470194    7260 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0514 01:09:35.471658    7260 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0514 01:09:35.471658    7260 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0514 01:09:35.472067    7260 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0514 01:09:35.473344    7260 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0514 01:09:35.473344    7260 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0514 01:09:35.473584    7260 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0514 01:09:35.474598    7260 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kindnet-204600 san=[127.0.0.1 172.23.99.4 kindnet-204600 localhost minikube]
	I0514 01:09:35.707150    7260 provision.go:177] copyRemoteCerts
	I0514 01:09:35.717391    7260 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0514 01:09:35.717391    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:37.603663    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:37.614915    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:37.614915    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:39.818524    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:09:39.818524    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:39.828708    7260 sshutil.go:53] new ssh client: &{IP:172.23.99.4 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\id_rsa Username:docker}
	I0514 01:09:39.912975    7260 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1953048s)
	I0514 01:09:39.928400    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0514 01:09:39.976611    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I0514 01:09:40.018458    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0514 01:09:40.057246    7260 provision.go:87] duration metric: took 12.9572066s to configureAuth
	I0514 01:09:40.061184    7260 buildroot.go:189] setting minikube options for container-runtime
	I0514 01:09:40.061796    7260 config.go:182] Loaded profile config "kindnet-204600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:09:40.061860    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:42.002199    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:42.011895    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:42.011895    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:44.239008    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:09:44.239008    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:44.243274    7260 main.go:141] libmachine: Using SSH client type: native
	I0514 01:09:44.243643    7260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.99.4 22 <nil> <nil>}
	I0514 01:09:44.243717    7260 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0514 01:09:44.369401    7260 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0514 01:09:44.369401    7260 buildroot.go:70] root file system type: tmpfs
	I0514 01:09:44.369661    7260 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0514 01:09:44.369746    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:46.247716    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:46.257616    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:46.257708    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:48.611098    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:09:48.611098    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:48.620882    7260 main.go:141] libmachine: Using SSH client type: native
	I0514 01:09:48.621685    7260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.99.4 22 <nil> <nil>}
	I0514 01:09:48.621884    7260 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0514 01:09:48.768796    7260 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0514 01:09:48.768877    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:50.742762    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:50.742762    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:50.752930    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:53.075215    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:09:53.075215    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:53.089183    7260 main.go:141] libmachine: Using SSH client type: native
	I0514 01:09:53.089596    7260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.99.4 22 <nil> <nil>}
	I0514 01:09:53.089596    7260 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0514 01:09:55.150816    7260 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0514 01:09:55.150918    7260 machine.go:97] duration metric: took 41.3307245s to provisionDockerMachine
	I0514 01:09:55.150918    7260 client.go:171] duration metric: took 1m48.2277718s to LocalClient.Create
	I0514 01:09:55.150971    7260 start.go:167] duration metric: took 1m48.2278251s to libmachine.API.Create "kindnet-204600"
	I0514 01:09:55.151023    7260 start.go:293] postStartSetup for "kindnet-204600" (driver="hyperv")
	I0514 01:09:55.151023    7260 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0514 01:09:55.161359    7260 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0514 01:09:55.161359    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:57.133615    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:57.133615    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:57.133615    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:59.509011    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:09:59.509011    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:59.509343    7260 sshutil.go:53] new ssh client: &{IP:172.23.99.4 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\id_rsa Username:docker}
	I0514 01:09:59.607514    7260 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4458576s)
	I0514 01:09:59.616873    7260 ssh_runner.go:195] Run: cat /etc/os-release
	I0514 01:09:59.623651    7260 info.go:137] Remote host: Buildroot 2023.02.9
	I0514 01:09:59.623651    7260 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0514 01:09:59.624111    7260 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0514 01:09:59.624694    7260 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> 59842.pem in /etc/ssl/certs
	I0514 01:09:59.633563    7260 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0514 01:09:59.654460    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /etc/ssl/certs/59842.pem (1708 bytes)
	I0514 01:09:59.702904    7260 start.go:296] duration metric: took 4.5515772s for postStartSetup
	I0514 01:09:59.704855    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:10:01.694691    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:01.705180    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:01.705180    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:04.044546    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:10:04.044546    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:04.054540    7260 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\config.json ...
	I0514 01:10:04.056717    7260 start.go:128] duration metric: took 1m57.1370716s to createHost
	I0514 01:10:04.056717    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:10:06.001757    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:06.001757    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:06.011826    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:08.323878    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:10:08.323878    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:08.339059    7260 main.go:141] libmachine: Using SSH client type: native
	I0514 01:10:08.339652    7260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.99.4 22 <nil> <nil>}
	I0514 01:10:08.339652    7260 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0514 01:10:08.469641    7260 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715649008.704828778
	
	I0514 01:10:08.469641    7260 fix.go:216] guest clock: 1715649008.704828778
	I0514 01:10:08.469641    7260 fix.go:229] Guest: 2024-05-14 01:10:08.704828778 +0000 UTC Remote: 2024-05-14 01:10:04.0567177 +0000 UTC m=+394.123370801 (delta=4.648111078s)
	I0514 01:10:08.469641    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:10:12.821332   14332 start.go:364] duration metric: took 4m34.6390057s to acquireMachinesLock for "pause-851700"
	I0514 01:10:12.822103   14332 start.go:96] Skipping create...Using existing machine configuration
	I0514 01:10:12.822221   14332 fix.go:54] fixHost starting: 
	I0514 01:10:12.823140   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:10:14.911268   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:14.911268   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:14.911268   14332 fix.go:112] recreateIfNeeded on pause-851700: state=Running err=<nil>
	W0514 01:10:14.911268   14332 fix.go:138] unexpected machine state, will restart: <nil>
	I0514 01:10:14.915701   14332 out.go:177] * Updating the running hyperv "pause-851700" VM ...
	I0514 01:10:10.380641    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:10.380641    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:10.396165    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:12.669176    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:10:12.679240    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:12.682860    7260 main.go:141] libmachine: Using SSH client type: native
	I0514 01:10:12.683218    7260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.99.4 22 <nil> <nil>}
	I0514 01:10:12.683306    7260 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715649008
	I0514 01:10:12.821332    7260 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 14 01:10:08 UTC 2024
	
	I0514 01:10:12.821332    7260 fix.go:236] clock set: Tue May 14 01:10:08 UTC 2024
	 (err=<nil>)
	I0514 01:10:12.821332    7260 start.go:83] releasing machines lock for "kindnet-204600", held for 2m5.901902s
	I0514 01:10:12.821332    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:10:14.909118    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:14.909118    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:14.909214    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:14.918205   14332 machine.go:94] provisionDockerMachine start ...
	I0514 01:10:14.918349   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:10:16.977333   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:16.977333   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:16.977333   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:17.364670    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:10:17.364744    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:17.368492    7260 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0514 01:10:17.368492    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:10:17.381252    7260 ssh_runner.go:195] Run: cat /version.json
	I0514 01:10:17.381252    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:10:19.520155    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:19.526750    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:19.526750    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:19.542132    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:19.542132    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:19.551880    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:19.583159   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:10:19.583341   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:19.587157   14332 main.go:141] libmachine: Using SSH client type: native
	I0514 01:10:19.587708   14332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.154 22 <nil> <nil>}
	I0514 01:10:19.587824   14332 main.go:141] libmachine: About to run SSH command:
	hostname
	I0514 01:10:19.728093   14332 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-851700
	
	I0514 01:10:19.728093   14332 buildroot.go:166] provisioning hostname "pause-851700"
	I0514 01:10:19.728093   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:10:21.861174   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:21.861174   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:21.869962   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:22.037603    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:10:22.047219    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:22.047662    7260 sshutil.go:53] new ssh client: &{IP:172.23.99.4 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\id_rsa Username:docker}
	I0514 01:10:22.068105    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:10:22.068105    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:22.068105    7260 sshutil.go:53] new ssh client: &{IP:172.23.99.4 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\id_rsa Username:docker}
	I0514 01:10:22.190882    7260 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8220664s)
	I0514 01:10:22.190882    7260 ssh_runner.go:235] Completed: cat /version.json: (4.809308s)
	I0514 01:10:22.199146    7260 ssh_runner.go:195] Run: systemctl --version
	I0514 01:10:22.217590    7260 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0514 01:10:22.232886    7260 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0514 01:10:22.241905    7260 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0514 01:10:22.275867    7260 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0514 01:10:22.275867    7260 start.go:494] detecting cgroup driver to use...
	I0514 01:10:22.275867    7260 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 01:10:22.322308    7260 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0514 01:10:22.353728    7260 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0514 01:10:22.378183    7260 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0514 01:10:22.392289    7260 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0514 01:10:22.430762    7260 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 01:10:22.469044    7260 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0514 01:10:22.503620    7260 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 01:10:22.532652    7260 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0514 01:10:22.561329    7260 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0514 01:10:22.588010    7260 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0514 01:10:22.615533    7260 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0514 01:10:22.644526    7260 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0514 01:10:22.673652    7260 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0514 01:10:22.704166    7260 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:10:22.906252    7260 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0514 01:10:22.938711    7260 start.go:494] detecting cgroup driver to use...
	I0514 01:10:22.948604    7260 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0514 01:10:22.980103    7260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 01:10:23.020852    7260 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0514 01:10:23.069011    7260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 01:10:23.105894    7260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0514 01:10:23.145278    7260 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0514 01:10:23.219649    7260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0514 01:10:23.247450    7260 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 01:10:23.287271    7260 ssh_runner.go:195] Run: which cri-dockerd
	I0514 01:10:23.302125    7260 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0514 01:10:23.319780    7260 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0514 01:10:23.368010    7260 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0514 01:10:23.589014    7260 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0514 01:10:23.777696    7260 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0514 01:10:23.777696    7260 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0514 01:10:23.828626    7260 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:10:24.009326    7260 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0514 01:10:26.542848    7260 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5333528s)
	I0514 01:10:26.553345    7260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0514 01:10:26.589522    7260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 01:10:26.626809    7260 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0514 01:10:26.819914    7260 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0514 01:10:27.004894    7260 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:10:27.196133    7260 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0514 01:10:27.233366    7260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 01:10:27.269138    7260 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:10:27.497692    7260 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0514 01:10:27.607535    7260 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0514 01:10:27.617698    7260 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0514 01:10:27.628953    7260 start.go:562] Will wait 60s for crictl version
	I0514 01:10:27.641225    7260 ssh_runner.go:195] Run: which crictl
	I0514 01:10:27.658999    7260 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0514 01:10:27.714417    7260 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0514 01:10:27.724853    7260 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 01:10:27.768877    7260 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 01:10:24.227559   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:10:24.227559   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:24.242408   14332 main.go:141] libmachine: Using SSH client type: native
	I0514 01:10:24.242753   14332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.154 22 <nil> <nil>}
	I0514 01:10:24.242825   14332 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-851700 && echo "pause-851700" | sudo tee /etc/hostname
	I0514 01:10:24.403055   14332 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-851700
	
	I0514 01:10:24.403055   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:10:26.366671   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:26.366671   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:26.367174   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:27.801216    7260 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0514 01:10:27.801273    7260 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0514 01:10:27.805814    7260 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0514 01:10:27.805814    7260 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0514 01:10:27.805814    7260 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0514 01:10:27.805814    7260 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:95:ed Flags:up|broadcast|multicast|running}
	I0514 01:10:27.808854    7260 ip.go:210] interface addr: fe80::3ceb:68d:afab:af25/64
	I0514 01:10:27.808854    7260 ip.go:210] interface addr: 172.23.96.1/20
	I0514 01:10:27.811796    7260 ssh_runner.go:195] Run: grep 172.23.96.1	host.minikube.internal$ /etc/hosts
	I0514 01:10:27.823507    7260 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0514 01:10:27.844120    7260 kubeadm.go:877] updating cluster {Name:kindnet-204600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-204600 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:172.23.99.4 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0514 01:10:27.844120    7260 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0514 01:10:27.852427    7260 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0514 01:10:27.872033    7260 docker.go:685] Got preloaded images: 
	I0514 01:10:27.872033    7260 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0514 01:10:27.880580    7260 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0514 01:10:27.906518    7260 ssh_runner.go:195] Run: which lz4
	I0514 01:10:27.921112    7260 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0514 01:10:27.928436    7260 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0514 01:10:27.928608    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0514 01:10:29.848665    7260 docker.go:649] duration metric: took 1.9355285s to copy over tarball
	I0514 01:10:29.858133    7260 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0514 01:10:28.992515   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:10:28.992515   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:28.996995   14332 main.go:141] libmachine: Using SSH client type: native
	I0514 01:10:28.997533   14332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.154 22 <nil> <nil>}
	I0514 01:10:28.997651   14332 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-851700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-851700/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-851700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0514 01:10:29.170768   14332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0514 01:10:29.170842   14332 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0514 01:10:29.170985   14332 buildroot.go:174] setting up certificates
	I0514 01:10:29.170985   14332 provision.go:84] configureAuth start
	I0514 01:10:29.171126   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:10:31.414956   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:31.414956   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:31.415050   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:33.905433   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:10:33.905433   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:33.905757   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:10:35.988267   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:35.988511   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:35.988569   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:36.498206    7260 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (6.6396284s)
	I0514 01:10:36.498206    7260 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0514 01:10:36.564075    7260 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0514 01:10:38.212074    7260 ssh_runner.go:235] Completed: sudo cat /var/lib/docker/image/overlay2/repositories.json: (1.6478894s)
	I0514 01:10:38.212311    7260 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0514 01:10:38.255060    7260 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:10:38.480019    7260 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0514 01:10:38.427972   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:10:38.428088   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:38.428088   14332 provision.go:143] copyHostCerts
	I0514 01:10:38.428541   14332 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0514 01:10:38.428541   14332 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0514 01:10:38.428991   14332 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0514 01:10:38.430327   14332 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0514 01:10:38.430327   14332 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0514 01:10:38.430755   14332 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0514 01:10:38.432105   14332 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0514 01:10:38.432193   14332 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0514 01:10:38.432562   14332 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0514 01:10:38.432879   14332 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.pause-851700 san=[127.0.0.1 172.23.111.154 localhost minikube pause-851700]
	I0514 01:10:38.755420   14332 provision.go:177] copyRemoteCerts
	I0514 01:10:38.762813   14332 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0514 01:10:38.762813   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:10:40.753825   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:40.754145   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:40.754424   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:43.135802   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:10:43.135802   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:43.136208   14332 sshutil.go:53] new ssh client: &{IP:172.23.111.154 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\pause-851700\id_rsa Username:docker}
	I0514 01:10:42.840745    7260 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.3604331s)
	I0514 01:10:42.848377    7260 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0514 01:10:42.876276    7260 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0514 01:10:42.876323    7260 cache_images.go:84] Images are preloaded, skipping loading
	I0514 01:10:42.876323    7260 kubeadm.go:928] updating node { 172.23.99.4 8443 v1.30.0 docker true true} ...
	I0514 01:10:42.876488    7260 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-204600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.99.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:kindnet-204600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0514 01:10:42.883326    7260 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0514 01:10:42.921695    7260 cni.go:84] Creating CNI manager for "kindnet"
	I0514 01:10:42.921695    7260 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0514 01:10:42.921695    7260 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.99.4 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-204600 NodeName:kindnet-204600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.99.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.99.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0514 01:10:42.921695    7260 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.99.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kindnet-204600"
	  kubeletExtraArgs:
	    node-ip: 172.23.99.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.99.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0514 01:10:42.932008    7260 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0514 01:10:42.951537    7260 binaries.go:44] Found k8s binaries, skipping transfer
	I0514 01:10:42.960693    7260 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0514 01:10:42.977487    7260 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0514 01:10:43.010336    7260 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0514 01:10:43.041440    7260 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0514 01:10:43.081580    7260 ssh_runner.go:195] Run: grep 172.23.99.4	control-plane.minikube.internal$ /etc/hosts
	I0514 01:10:43.088064    7260 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.99.4	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0514 01:10:43.117920    7260 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:10:43.316727    7260 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0514 01:10:43.347198    7260 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600 for IP: 172.23.99.4
	I0514 01:10:43.347247    7260 certs.go:194] generating shared ca certs ...
	I0514 01:10:43.347299    7260 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:10:43.348074    7260 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0514 01:10:43.348493    7260 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0514 01:10:43.348697    7260 certs.go:256] generating profile certs ...
	I0514 01:10:43.349478    7260 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\client.key
	I0514 01:10:43.349612    7260 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\client.crt with IP's: []
	I0514 01:10:43.550567    7260 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\client.crt ...
	I0514 01:10:43.550567    7260 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\client.crt: {Name:mk447ec615ac7cfb9098663709e09f31e7a4c310 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:10:43.550567    7260 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\client.key ...
	I0514 01:10:43.550567    7260 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\client.key: {Name:mk3b43b340e5f30800ec094c7f26f77520de35c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:10:43.552099    7260 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\apiserver.key.239f16d8
	I0514 01:10:43.552099    7260 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\apiserver.crt.239f16d8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.99.4]
	I0514 01:10:43.706218    7260 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\apiserver.crt.239f16d8 ...
	I0514 01:10:43.706320    7260 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\apiserver.crt.239f16d8: {Name:mk911640488392ebda6774ce8198951c32666df0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:10:43.707397    7260 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\apiserver.key.239f16d8 ...
	I0514 01:10:43.707397    7260 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\apiserver.key.239f16d8: {Name:mkb88ea0628e1097285c601ff90a8f1a7bc94dff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:10:43.708081    7260 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\apiserver.crt.239f16d8 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\apiserver.crt
	I0514 01:10:43.718957    7260 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\apiserver.key.239f16d8 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\apiserver.key
	I0514 01:10:43.720054    7260 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\proxy-client.key
	I0514 01:10:43.720054    7260 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\proxy-client.crt with IP's: []
	I0514 01:10:43.869082    7260 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\proxy-client.crt ...
	I0514 01:10:43.869082    7260 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\proxy-client.crt: {Name:mk83cdee94f5cafe180c7b2a365086694dc5d50d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:10:43.870240    7260 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\proxy-client.key ...
	I0514 01:10:43.870240    7260 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\proxy-client.key: {Name:mk1ba62c1687114848064427cc837edbfc7f4d69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:10:43.884191    7260 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem (1338 bytes)
	W0514 01:10:43.884191    7260 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984_empty.pem, impossibly tiny 0 bytes
	I0514 01:10:43.884191    7260 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0514 01:10:43.884191    7260 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0514 01:10:43.884191    7260 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0514 01:10:43.885195    7260 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0514 01:10:43.885195    7260 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem (1708 bytes)
	I0514 01:10:43.886198    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0514 01:10:43.932933    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0514 01:10:43.983918    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0514 01:10:44.030726    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0514 01:10:44.075750    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0514 01:10:44.122543    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0514 01:10:44.184814    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0514 01:10:44.250210    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0514 01:10:44.300040    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /usr/share/ca-certificates/59842.pem (1708 bytes)
	I0514 01:10:44.348824    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0514 01:10:44.405653    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem --> /usr/share/ca-certificates/5984.pem (1338 bytes)
	I0514 01:10:44.461631    7260 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0514 01:10:44.510066    7260 ssh_runner.go:195] Run: openssl version
	I0514 01:10:44.530562    7260 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59842.pem && ln -fs /usr/share/ca-certificates/59842.pem /etc/ssl/certs/59842.pem"
	I0514 01:10:44.563076    7260 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59842.pem
	I0514 01:10:44.570027    7260 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0514 01:10:44.581440    7260 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59842.pem
	I0514 01:10:44.599449    7260 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59842.pem /etc/ssl/certs/3ec20f2e.0"
	I0514 01:10:44.628287    7260 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0514 01:10:44.662425    7260 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0514 01:10:44.669390    7260 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0514 01:10:44.681682    7260 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0514 01:10:44.702734    7260 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0514 01:10:44.734298    7260 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5984.pem && ln -fs /usr/share/ca-certificates/5984.pem /etc/ssl/certs/5984.pem"
	I0514 01:10:44.763750    7260 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5984.pem
	I0514 01:10:44.770999    7260 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0514 01:10:44.779980    7260 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5984.pem
	I0514 01:10:44.799907    7260 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5984.pem /etc/ssl/certs/51391683.0"
	I0514 01:10:44.828691    7260 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0514 01:10:44.836384    7260 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0514 01:10:44.836799    7260 kubeadm.go:391] StartCluster: {Name:kindnet-204600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-204600 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:172.23.99.4 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0514 01:10:44.843481    7260 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0514 01:10:44.878035    7260 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0514 01:10:44.910366    7260 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0514 01:10:44.939428    7260 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0514 01:10:44.958307    7260 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0514 01:10:44.958307    7260 kubeadm.go:156] found existing configuration files:
	
	I0514 01:10:44.968390    7260 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0514 01:10:44.987567    7260 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0514 01:10:44.996199    7260 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0514 01:10:45.023764    7260 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0514 01:10:45.040789    7260 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0514 01:10:43.248544   14332 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4853659s)
	I0514 01:10:43.248783   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0514 01:10:43.300337   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0514 01:10:43.365256   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0514 01:10:43.426991   14332 provision.go:87] duration metric: took 14.2550506s to configureAuth
	I0514 01:10:43.426991   14332 buildroot.go:189] setting minikube options for container-runtime
	I0514 01:10:43.427983   14332 config.go:182] Loaded profile config "pause-851700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:10:43.427983   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:10:45.588673   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:45.588721   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:45.588775   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:48.032161   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:10:48.032161   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:48.039894   14332 main.go:141] libmachine: Using SSH client type: native
	I0514 01:10:48.040515   14332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.154 22 <nil> <nil>}
	I0514 01:10:48.040515   14332 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0514 01:10:45.059219    7260 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0514 01:10:45.091232    7260 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0514 01:10:45.108284    7260 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0514 01:10:45.117900    7260 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0514 01:10:45.145011    7260 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0514 01:10:45.162455    7260 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0514 01:10:45.171554    7260 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0514 01:10:45.189632    7260 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0514 01:10:45.445222    7260 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0514 01:10:48.178721   14332 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0514 01:10:48.178780   14332 buildroot.go:70] root file system type: tmpfs
	I0514 01:10:48.179073   14332 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0514 01:10:48.179185   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:10:50.204433   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:50.204433   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:50.205419   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:52.691891   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:10:52.691891   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:52.697571   14332 main.go:141] libmachine: Using SSH client type: native
	I0514 01:10:52.698068   14332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.154 22 <nil> <nil>}
	I0514 01:10:52.698213   14332 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0514 01:10:52.875889   14332 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0514 01:10:52.875889   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:10:54.999366   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:54.999366   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:54.999366   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:57.407373   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:10:57.407838   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:57.413611   14332 main.go:141] libmachine: Using SSH client type: native
	I0514 01:10:57.414285   14332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.154 22 <nil> <nil>}
	I0514 01:10:57.414285   14332 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0514 01:10:57.576560   14332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0514 01:10:57.576560   14332 machine.go:97] duration metric: took 42.6554008s to provisionDockerMachine
	I0514 01:10:57.576560   14332 start.go:293] postStartSetup for "pause-851700" (driver="hyperv")
	I0514 01:10:57.576560   14332 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0514 01:10:57.585609   14332 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0514 01:10:57.585609   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:10:58.606534    7260 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0514 01:10:58.606656    7260 kubeadm.go:309] [preflight] Running pre-flight checks
	I0514 01:10:58.606881    7260 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0514 01:10:58.607159    7260 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0514 01:10:58.607497    7260 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0514 01:10:58.607648    7260 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0514 01:10:58.610302    7260 out.go:204]   - Generating certificates and keys ...
	I0514 01:10:58.610447    7260 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0514 01:10:58.610560    7260 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0514 01:10:58.610787    7260 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0514 01:10:58.610920    7260 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0514 01:10:58.610989    7260 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0514 01:10:58.611097    7260 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0514 01:10:58.611223    7260 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0514 01:10:58.611407    7260 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kindnet-204600 localhost] and IPs [172.23.99.4 127.0.0.1 ::1]
	I0514 01:10:58.611593    7260 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0514 01:10:58.612042    7260 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kindnet-204600 localhost] and IPs [172.23.99.4 127.0.0.1 ::1]
	I0514 01:10:58.612042    7260 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0514 01:10:58.612042    7260 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0514 01:10:58.612042    7260 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0514 01:10:58.612732    7260 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0514 01:10:58.612860    7260 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0514 01:10:58.612974    7260 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0514 01:10:58.613100    7260 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0514 01:10:58.613312    7260 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0514 01:10:58.613495    7260 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0514 01:10:58.613495    7260 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0514 01:10:58.613495    7260 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0514 01:10:58.616693    7260 out.go:204]   - Booting up control plane ...
	I0514 01:10:58.616693    7260 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0514 01:10:58.617548    7260 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0514 01:10:58.617805    7260 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0514 01:10:58.618208    7260 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0514 01:10:58.618353    7260 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0514 01:10:58.618353    7260 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0514 01:10:58.618353    7260 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0514 01:10:58.618967    7260 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0514 01:10:58.619054    7260 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001590601s
	I0514 01:10:58.619159    7260 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0514 01:10:58.619159    7260 kubeadm.go:309] [api-check] The API server is healthy after 7.002478877s
	I0514 01:10:58.619159    7260 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0514 01:10:58.619714    7260 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0514 01:10:58.619997    7260 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0514 01:10:58.619997    7260 kubeadm.go:309] [mark-control-plane] Marking the node kindnet-204600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0514 01:10:58.620682    7260 kubeadm.go:309] [bootstrap-token] Using token: bh2oij.hcns315mms4vj5zn
	I0514 01:10:58.624810    7260 out.go:204]   - Configuring RBAC rules ...
	I0514 01:10:58.625001    7260 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0514 01:10:58.625200    7260 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0514 01:10:58.625200    7260 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0514 01:10:58.625200    7260 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0514 01:10:58.626012    7260 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0514 01:10:58.626109    7260 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0514 01:10:58.626600    7260 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0514 01:10:58.626729    7260 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0514 01:10:58.626790    7260 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0514 01:10:58.626868    7260 kubeadm.go:309] 
	I0514 01:10:58.626930    7260 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0514 01:10:58.626930    7260 kubeadm.go:309] 
	I0514 01:10:58.626995    7260 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0514 01:10:58.626995    7260 kubeadm.go:309] 
	I0514 01:10:58.626995    7260 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0514 01:10:58.627220    7260 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0514 01:10:58.627296    7260 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0514 01:10:58.627296    7260 kubeadm.go:309] 
	I0514 01:10:58.627389    7260 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0514 01:10:58.627389    7260 kubeadm.go:309] 
	I0514 01:10:58.627452    7260 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0514 01:10:58.627452    7260 kubeadm.go:309] 
	I0514 01:10:58.627508    7260 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0514 01:10:58.627640    7260 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0514 01:10:58.627701    7260 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0514 01:10:58.627701    7260 kubeadm.go:309] 
	I0514 01:10:58.627823    7260 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0514 01:10:58.627950    7260 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0514 01:10:58.627950    7260 kubeadm.go:309] 
	I0514 01:10:58.628113    7260 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token bh2oij.hcns315mms4vj5zn \
	I0514 01:10:58.628310    7260 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 \
	I0514 01:10:58.628370    7260 kubeadm.go:309] 	--control-plane 
	I0514 01:10:58.628455    7260 kubeadm.go:309] 
	I0514 01:10:58.628601    7260 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0514 01:10:58.628601    7260 kubeadm.go:309] 
	I0514 01:10:58.628733    7260 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token bh2oij.hcns315mms4vj5zn \
	I0514 01:10:58.628929    7260 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 
	I0514 01:10:58.628994    7260 cni.go:84] Creating CNI manager for "kindnet"
	I0514 01:10:58.631014    7260 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0514 01:10:58.642719    7260 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0514 01:10:58.652672    7260 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0514 01:10:58.652672    7260 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0514 01:10:58.704614    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0514 01:10:59.070161    7260 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0514 01:10:59.080484    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:10:59.083116    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-204600 minikube.k8s.io/updated_at=2024_05_14T01_10_59_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761 minikube.k8s.io/name=kindnet-204600 minikube.k8s.io/primary=true
	I0514 01:10:59.090082    7260 ops.go:34] apiserver oom_adj: -16
	I0514 01:10:59.234929    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:10:59.747145    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:10:59.677012   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:59.677451   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:59.677451   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:11:02.069619   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:11:02.069619   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:02.070048   14332 sshutil.go:53] new ssh client: &{IP:172.23.111.154 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\pause-851700\id_rsa Username:docker}
	I0514 01:11:02.190079   14332 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6040499s)
	I0514 01:11:02.197949   14332 ssh_runner.go:195] Run: cat /etc/os-release
	I0514 01:11:02.205403   14332 info.go:137] Remote host: Buildroot 2023.02.9
	I0514 01:11:02.205438   14332 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0514 01:11:02.205438   14332 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0514 01:11:02.206398   14332 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> 59842.pem in /etc/ssl/certs
	I0514 01:11:02.214998   14332 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0514 01:11:02.239327   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /etc/ssl/certs/59842.pem (1708 bytes)
	I0514 01:11:02.297683   14332 start.go:296] duration metric: took 4.7208061s for postStartSetup
	I0514 01:11:02.297683   14332 fix.go:56] duration metric: took 49.4722639s for fixHost
	I0514 01:11:02.297683   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:11:00.234360    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:00.741205    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:01.252042    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:01.738136    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:02.240993    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:02.754950    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:03.237699    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:03.746478    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:04.236777    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:04.738529    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:04.386721   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:11:04.387203   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:04.387578   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:11:06.809630   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:11:06.809630   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:06.814226   14332 main.go:141] libmachine: Using SSH client type: native
	I0514 01:11:06.814650   14332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.154 22 <nil> <nil>}
	I0514 01:11:06.814650   14332 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0514 01:11:06.955236   14332 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715649067.199006560
	
	I0514 01:11:06.955350   14332 fix.go:216] guest clock: 1715649067.199006560
	I0514 01:11:06.955350   14332 fix.go:229] Guest: 2024-05-14 01:11:07.19900656 +0000 UTC Remote: 2024-05-14 01:11:02.2976836 +0000 UTC m=+329.263475201 (delta=4.90132296s)
	I0514 01:11:06.955488   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:11:05.247554    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:05.740457    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:06.246371    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:06.737217    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:07.242065    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:07.735442    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:08.239331    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:08.742830    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:09.249739    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:09.740793    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:11.558829     744 start.go:364] duration metric: took 3m16.9120997s to acquireMachinesLock for "calico-204600"
	I0514 01:11:11.559247     744 start.go:93] Provisioning new machine with config: &{Name:calico-204600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:calico-204600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0514 01:11:11.559247     744 start.go:125] createHost starting for "" (driver="hyperv")
	I0514 01:11:10.248843    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:10.745666    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:11.247226    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:11.748765    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:11.917542    7260 kubeadm.go:1107] duration metric: took 12.8465186s to wait for elevateKubeSystemPrivileges
	W0514 01:11:11.917542    7260 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0514 01:11:11.917542    7260 kubeadm.go:393] duration metric: took 27.0789828s to StartCluster
	I0514 01:11:11.917542    7260 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:11:11.917542    7260 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0514 01:11:11.920548    7260 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:11:11.922548    7260 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0514 01:11:11.922548    7260 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0514 01:11:11.922548    7260 start.go:234] Will wait 15m0s for node &{Name: IP:172.23.99.4 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0514 01:11:11.922548    7260 addons.go:69] Setting storage-provisioner=true in profile "kindnet-204600"
	I0514 01:11:11.925546    7260 out.go:177] * Verifying Kubernetes components...
	I0514 01:11:11.922548    7260 addons.go:69] Setting default-storageclass=true in profile "kindnet-204600"
	I0514 01:11:11.922548    7260 addons.go:234] Setting addon storage-provisioner=true in "kindnet-204600"
	I0514 01:11:11.922548    7260 config.go:182] Loaded profile config "kindnet-204600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:11:08.988103   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:11:08.988103   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:08.988828   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:11:11.394561   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:11:11.394561   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:11.400109   14332 main.go:141] libmachine: Using SSH client type: native
	I0514 01:11:11.400961   14332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.154 22 <nil> <nil>}
	I0514 01:11:11.401029   14332 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715649066
	I0514 01:11:11.558145   14332 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 14 01:11:06 UTC 2024
	
	I0514 01:11:11.558145   14332 fix.go:236] clock set: Tue May 14 01:11:06 UTC 2024
	 (err=<nil>)
	I0514 01:11:11.558145   14332 start.go:83] releasing machines lock for "pause-851700", held for 58.7323315s
	I0514 01:11:11.558145   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:11:11.562752     744 out.go:204] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0514 01:11:11.562752     744 start.go:159] libmachine.API.Create for "calico-204600" (driver="hyperv")
	I0514 01:11:11.562752     744 client.go:168] LocalClient.Create starting
	I0514 01:11:11.563746     744 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0514 01:11:11.563746     744 main.go:141] libmachine: Decoding PEM data...
	I0514 01:11:11.563746     744 main.go:141] libmachine: Parsing certificate...
	I0514 01:11:11.563746     744 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0514 01:11:11.563746     744 main.go:141] libmachine: Decoding PEM data...
	I0514 01:11:11.563746     744 main.go:141] libmachine: Parsing certificate...
	I0514 01:11:11.563746     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0514 01:11:13.999122     744 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0514 01:11:13.999747     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:13.999747     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0514 01:11:11.925546    7260 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-204600"
	I0514 01:11:11.925546    7260 host.go:66] Checking if "kindnet-204600" exists ...
	I0514 01:11:11.929548    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:11:11.929548    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:11:11.945834    7260 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:11:12.341622    7260 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.23.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0514 01:11:12.531903    7260 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0514 01:11:13.074037    7260 start.go:946] {"host.minikube.internal": 172.23.96.1} host record injected into CoreDNS's ConfigMap
	I0514 01:11:13.078717    7260 node_ready.go:35] waiting up to 15m0s for node "kindnet-204600" to be "Ready" ...
	I0514 01:11:13.603219    7260 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kindnet-204600" context rescaled to 1 replicas
	I0514 01:11:14.769804    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:11:14.769804    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:14.771995    7260 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0514 01:11:14.775269    7260 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0514 01:11:14.775269    7260 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0514 01:11:14.775269    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:11:14.805525    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:11:14.805525    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:14.807684    7260 addons.go:234] Setting addon default-storageclass=true in "kindnet-204600"
	I0514 01:11:14.807872    7260 host.go:66] Checking if "kindnet-204600" exists ...
	I0514 01:11:14.808877    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:11:14.329898   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:11:14.330092   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:14.330152   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:11:17.629166   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:11:17.629236   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:17.632900   14332 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0514 01:11:17.632900   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:11:17.648771   14332 ssh_runner.go:195] Run: cat /version.json
	I0514 01:11:17.648771   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:11:16.322054     744 main.go:141] libmachine: [stdout =====>] : False
	
	I0514 01:11:16.322415     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:16.322415     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0514 01:11:18.166886     744 main.go:141] libmachine: [stdout =====>] : True
	
	I0514 01:11:18.167885     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:18.167885     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0514 01:11:15.100642    7260 node_ready.go:53] node "kindnet-204600" has status "Ready":"False"
	I0514 01:11:17.540567    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:11:17.540567    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:17.540567    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:11:17.540567    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:17.540567    7260 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0514 01:11:17.540567    7260 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0514 01:11:17.541580    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:11:17.541580    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:11:17.596609    7260 node_ready.go:53] node "kindnet-204600" has status "Ready":"False"
	I0514 01:11:20.373013   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:11:20.373152   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:20.373342   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:11:20.376564   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:11:20.376564   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:20.376564   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:11:20.100006    7260 node_ready.go:53] node "kindnet-204600" has status "Ready":"False"
	I0514 01:11:20.345685    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:11:20.345685    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:20.345685    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:11:20.696662    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:11:20.696662    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:20.697172    7260 sshutil.go:53] new ssh client: &{IP:172.23.99.4 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\id_rsa Username:docker}
	I0514 01:11:20.966855    7260 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0514 01:11:22.254590    7260 node_ready.go:53] node "kindnet-204600" has status "Ready":"False"
	I0514 01:11:22.375196    7260 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.4082463s)
	I0514 01:11:23.372382    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:11:23.372382    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:23.373538    7260 sshutil.go:53] new ssh client: &{IP:172.23.99.4 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\id_rsa Username:docker}
	I0514 01:11:23.529801    7260 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0514 01:11:23.728692    7260 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0514 01:11:23.035743     744 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0514 01:11:23.035743     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:23.037971     744 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-amd64.iso...
	I0514 01:11:23.380896     744 main.go:141] libmachine: Creating SSH key...
	I0514 01:11:23.696596     744 main.go:141] libmachine: Creating VM...
	I0514 01:11:23.696596     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0514 01:11:23.730819    7260 addons.go:505] duration metric: took 11.8074794s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0514 01:11:24.592592    7260 node_ready.go:53] node "kindnet-204600" has status "Ready":"False"
	I0514 01:11:23.225173   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:11:23.225173   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:23.225856   14332 sshutil.go:53] new ssh client: &{IP:172.23.111.154 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\pause-851700\id_rsa Username:docker}
	I0514 01:11:23.291941   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:11:23.292042   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:23.292631   14332 sshutil.go:53] new ssh client: &{IP:172.23.111.154 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\pause-851700\id_rsa Username:docker}
	I0514 01:11:25.323341   14332 ssh_runner.go:235] Completed: cat /version.json: (7.6739533s)
	I0514 01:11:25.323421   14332 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (7.6899247s)
	W0514 01:11:25.323421   14332 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2001 milliseconds
	W0514 01:11:25.324085   14332 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	W0514 01:11:25.324085   14332 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0514 01:11:25.333326   14332 ssh_runner.go:195] Run: systemctl --version
	I0514 01:11:25.353326   14332 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0514 01:11:25.363137   14332 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0514 01:11:25.374635   14332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0514 01:11:25.397159   14332 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0514 01:11:25.397159   14332 start.go:494] detecting cgroup driver to use...
	I0514 01:11:25.397159   14332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 01:11:25.453746   14332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0514 01:11:25.482743   14332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0514 01:11:25.507103   14332 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0514 01:11:25.519812   14332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0514 01:11:25.556261   14332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 01:11:25.590710   14332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0514 01:11:25.623667   14332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 01:11:25.658914   14332 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0514 01:11:25.691844   14332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0514 01:11:25.729682   14332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0514 01:11:25.768560   14332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0514 01:11:25.800694   14332 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0514 01:11:25.834878   14332 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0514 01:11:25.870474   14332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:11:26.130380   14332 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0514 01:11:26.161369   14332 start.go:494] detecting cgroup driver to use...
	I0514 01:11:26.170904   14332 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0514 01:11:26.204385   14332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 01:11:26.235750   14332 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0514 01:11:26.273627   14332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 01:11:26.305869   14332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0514 01:11:26.330914   14332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 01:11:26.380216   14332 ssh_runner.go:195] Run: which cri-dockerd
	I0514 01:11:26.404605   14332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0514 01:11:26.430070   14332 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0514 01:11:26.478874   14332 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0514 01:11:26.782949   14332 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0514 01:11:27.047745   14332 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0514 01:11:27.047745   14332 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0514 01:11:27.102874   14332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:11:27.360714   14332 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0514 01:11:26.654945     744 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0514 01:11:26.655105     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:26.655353     744 main.go:141] libmachine: Using switch "Default Switch"
	I0514 01:11:26.655581     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0514 01:11:28.381263     744 main.go:141] libmachine: [stdout =====>] : True
	
	I0514 01:11:28.381263     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:28.382217     744 main.go:141] libmachine: Creating VHD
	I0514 01:11:28.382217     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-204600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0514 01:11:26.598938    7260 node_ready.go:53] node "kindnet-204600" has status "Ready":"False"
	I0514 01:11:28.086004    7260 node_ready.go:49] node "kindnet-204600" has status "Ready":"True"
	I0514 01:11:28.086004    7260 node_ready.go:38] duration metric: took 15.0062801s for node "kindnet-204600" to be "Ready" ...
	I0514 01:11:28.086004    7260 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 01:11:28.099306    7260 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-g9c6b" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:30.119549    7260 pod_ready.go:102] pod "coredns-7db6d8ff4d-g9c6b" in "kube-system" namespace has status "Ready":"False"
	I0514 01:11:30.608479    7260 pod_ready.go:92] pod "coredns-7db6d8ff4d-g9c6b" in "kube-system" namespace has status "Ready":"True"
	I0514 01:11:30.609013    7260 pod_ready.go:81] duration metric: took 2.5095379s for pod "coredns-7db6d8ff4d-g9c6b" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:30.609061    7260 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:30.616764    7260 pod_ready.go:92] pod "etcd-kindnet-204600" in "kube-system" namespace has status "Ready":"True"
	I0514 01:11:30.616816    7260 pod_ready.go:81] duration metric: took 7.7113ms for pod "etcd-kindnet-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:30.616867    7260 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:30.624296    7260 pod_ready.go:92] pod "kube-apiserver-kindnet-204600" in "kube-system" namespace has status "Ready":"True"
	I0514 01:11:30.624385    7260 pod_ready.go:81] duration metric: took 7.5183ms for pod "kube-apiserver-kindnet-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:30.624385    7260 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:30.630738    7260 pod_ready.go:92] pod "kube-controller-manager-kindnet-204600" in "kube-system" namespace has status "Ready":"True"
	I0514 01:11:30.630738    7260 pod_ready.go:81] duration metric: took 6.3518ms for pod "kube-controller-manager-kindnet-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:30.630840    7260 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-9k6gx" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:30.637022    7260 pod_ready.go:92] pod "kube-proxy-9k6gx" in "kube-system" namespace has status "Ready":"True"
	I0514 01:11:30.637022    7260 pod_ready.go:81] duration metric: took 6.181ms for pod "kube-proxy-9k6gx" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:30.637022    7260 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:31.015166    7260 pod_ready.go:92] pod "kube-scheduler-kindnet-204600" in "kube-system" namespace has status "Ready":"True"
	I0514 01:11:31.015262    7260 pod_ready.go:81] duration metric: took 378.2154ms for pod "kube-scheduler-kindnet-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:31.015262    7260 pod_ready.go:38] duration metric: took 2.929062s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 01:11:31.015423    7260 api_server.go:52] waiting for apiserver process to appear ...
	I0514 01:11:31.024474    7260 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 01:11:31.051415    7260 api_server.go:72] duration metric: took 19.1275836s to wait for apiserver process to appear ...
	I0514 01:11:31.051415    7260 api_server.go:88] waiting for apiserver healthz status ...
	I0514 01:11:31.051557    7260 api_server.go:253] Checking apiserver healthz at https://172.23.99.4:8443/healthz ...
	I0514 01:11:31.058236    7260 api_server.go:279] https://172.23.99.4:8443/healthz returned 200:
	ok
	I0514 01:11:31.060098    7260 api_server.go:141] control plane version: v1.30.0
	I0514 01:11:31.060154    7260 api_server.go:131] duration metric: took 8.5971ms to wait for apiserver health ...
	I0514 01:11:31.060154    7260 system_pods.go:43] waiting for kube-system pods to appear ...
	I0514 01:11:31.227120    7260 system_pods.go:59] 8 kube-system pods found
	I0514 01:11:31.227220    7260 system_pods.go:61] "coredns-7db6d8ff4d-g9c6b" [14fe8949-2d6f-4cc4-875a-41906f555bb8] Running
	I0514 01:11:31.227220    7260 system_pods.go:61] "etcd-kindnet-204600" [e223fa6d-7886-4c3f-9fb6-d62b585aa2e5] Running
	I0514 01:11:31.227220    7260 system_pods.go:61] "kindnet-cfmvs" [b7d81597-9401-4ec6-8ea6-b8896d7c01ee] Running
	I0514 01:11:31.227220    7260 system_pods.go:61] "kube-apiserver-kindnet-204600" [7443477a-7ade-4949-aa52-2f8c64653fa3] Running
	I0514 01:11:31.227220    7260 system_pods.go:61] "kube-controller-manager-kindnet-204600" [da7d4ca0-9ce7-4321-aee0-11feae96f366] Running
	I0514 01:11:31.227220    7260 system_pods.go:61] "kube-proxy-9k6gx" [fbc00844-bd79-4bc5-8a77-92dd79a5ab69] Running
	I0514 01:11:31.227220    7260 system_pods.go:61] "kube-scheduler-kindnet-204600" [7c26b954-6434-4f90-946a-cadb9459e8e1] Running
	I0514 01:11:31.227220    7260 system_pods.go:61] "storage-provisioner" [30aca202-5988-46db-b78b-5a14a898ecc0] Running
	I0514 01:11:31.227220    7260 system_pods.go:74] duration metric: took 167.054ms to wait for pod list to return data ...
	I0514 01:11:31.227220    7260 default_sa.go:34] waiting for default service account to be created ...
	I0514 01:11:31.408666    7260 default_sa.go:45] found service account: "default"
	I0514 01:11:31.408666    7260 default_sa.go:55] duration metric: took 181.4342ms for default service account to be created ...
	I0514 01:11:31.408666    7260 system_pods.go:116] waiting for k8s-apps to be running ...
	I0514 01:11:31.618504    7260 system_pods.go:86] 8 kube-system pods found
	I0514 01:11:31.618504    7260 system_pods.go:89] "coredns-7db6d8ff4d-g9c6b" [14fe8949-2d6f-4cc4-875a-41906f555bb8] Running
	I0514 01:11:31.618574    7260 system_pods.go:89] "etcd-kindnet-204600" [e223fa6d-7886-4c3f-9fb6-d62b585aa2e5] Running
	I0514 01:11:31.618574    7260 system_pods.go:89] "kindnet-cfmvs" [b7d81597-9401-4ec6-8ea6-b8896d7c01ee] Running
	I0514 01:11:31.618574    7260 system_pods.go:89] "kube-apiserver-kindnet-204600" [7443477a-7ade-4949-aa52-2f8c64653fa3] Running
	I0514 01:11:31.618574    7260 system_pods.go:89] "kube-controller-manager-kindnet-204600" [da7d4ca0-9ce7-4321-aee0-11feae96f366] Running
	I0514 01:11:31.618574    7260 system_pods.go:89] "kube-proxy-9k6gx" [fbc00844-bd79-4bc5-8a77-92dd79a5ab69] Running
	I0514 01:11:31.618574    7260 system_pods.go:89] "kube-scheduler-kindnet-204600" [7c26b954-6434-4f90-946a-cadb9459e8e1] Running
	I0514 01:11:31.618574    7260 system_pods.go:89] "storage-provisioner" [30aca202-5988-46db-b78b-5a14a898ecc0] Running
	I0514 01:11:31.618574    7260 system_pods.go:126] duration metric: took 209.8939ms to wait for k8s-apps to be running ...
	I0514 01:11:31.618574    7260 system_svc.go:44] waiting for kubelet service to be running ....
	I0514 01:11:31.628490    7260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0514 01:11:31.652285    7260 system_svc.go:56] duration metric: took 33.6058ms WaitForService to wait for kubelet
	I0514 01:11:31.652285    7260 kubeadm.go:576] duration metric: took 19.7284135s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0514 01:11:31.652384    7260 node_conditions.go:102] verifying NodePressure condition ...
	I0514 01:11:31.807374    7260 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 01:11:31.807482    7260 node_conditions.go:123] node cpu capacity is 2
	I0514 01:11:31.807482    7260 node_conditions.go:105] duration metric: took 155.0873ms to run NodePressure ...
	I0514 01:11:31.807482    7260 start.go:240] waiting for startup goroutines ...
	I0514 01:11:31.807482    7260 start.go:245] waiting for cluster config update ...
	I0514 01:11:31.807482    7260 start.go:254] writing updated cluster config ...
	I0514 01:11:31.816289    7260 ssh_runner.go:195] Run: rm -f paused
	I0514 01:11:31.939225    7260 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0514 01:11:31.945228    7260 out.go:177] * Done! kubectl is now configured to use "kindnet-204600" cluster and "default" namespace by default
	I0514 01:11:32.075105     744 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-204600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 91817EFD-298A-4F06-B898-93D1B41E87FD
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0514 01:11:32.075105     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:32.075105     744 main.go:141] libmachine: Writing magic tar header
	I0514 01:11:32.075209     744 main.go:141] libmachine: Writing SSH key tar header
	I0514 01:11:32.083358     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-204600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-204600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0514 01:11:35.191325     744 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:11:35.192286     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:35.192286     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-204600\disk.vhd' -SizeBytes 20000MB
	I0514 01:11:37.636592     744 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:11:37.636592     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:37.636592     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM calico-204600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-204600' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0514 01:11:40.399684   14332 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.0369443s)
	I0514 01:11:40.413127   14332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0514 01:11:40.462182   14332 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0514 01:11:40.530497   14332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 01:11:40.572672   14332 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0514 01:11:40.799257   14332 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0514 01:11:41.040161   14332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:11:41.260173   14332 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0514 01:11:41.308922   14332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 01:11:41.342730   14332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:11:41.578325   14332 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0514 01:11:41.733566   14332 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0514 01:11:41.747054   14332 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0514 01:11:41.767793   14332 start.go:562] Will wait 60s for crictl version
	I0514 01:11:41.778790   14332 ssh_runner.go:195] Run: which crictl
	I0514 01:11:41.807433   14332 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0514 01:11:41.873479   14332 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0514 01:11:41.880479   14332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 01:11:41.924471   14332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 01:11:41.975835   14332 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0514 01:11:41.976024   14332 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0514 01:11:41.980632   14332 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0514 01:11:41.980632   14332 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0514 01:11:41.980632   14332 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0514 01:11:41.980632   14332 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:95:ed Flags:up|broadcast|multicast|running}
	I0514 01:11:41.983734   14332 ip.go:210] interface addr: fe80::3ceb:68d:afab:af25/64
	I0514 01:11:41.983734   14332 ip.go:210] interface addr: 172.23.96.1/20
	I0514 01:11:41.992765   14332 ssh_runner.go:195] Run: grep 172.23.96.1	host.minikube.internal$ /etc/hosts
	I0514 01:11:42.002212   14332 kubeadm.go:877] updating cluster {Name:pause-851700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-851700 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.111.154 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:
false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0514 01:11:42.002212   14332 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0514 01:11:42.011915   14332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0514 01:11:42.039415   14332 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0514 01:11:42.039494   14332 docker.go:615] Images already preloaded, skipping extraction
	I0514 01:11:42.047964   14332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0514 01:11:42.074506   14332 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0514 01:11:42.074506   14332 cache_images.go:84] Images are preloaded, skipping loading
	I0514 01:11:42.074506   14332 kubeadm.go:928] updating node { 172.23.111.154 8443 v1.30.0 docker true true} ...
	I0514 01:11:42.074506   14332 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-851700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.111.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:pause-851700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0514 01:11:42.083992   14332 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0514 01:11:42.119963   14332 cni.go:84] Creating CNI manager for ""
	I0514 01:11:42.120051   14332 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0514 01:11:42.120051   14332 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0514 01:11:42.120146   14332 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.111.154 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-851700 NodeName:pause-851700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.111.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.111.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0514 01:11:42.120322   14332 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.111.154
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "pause-851700"
	  kubeletExtraArgs:
	    node-ip: 172.23.111.154
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.111.154"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0514 01:11:42.131148   14332 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0514 01:11:42.151031   14332 binaries.go:44] Found k8s binaries, skipping transfer
	I0514 01:11:42.165837   14332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0514 01:11:42.185242   14332 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0514 01:11:42.218813   14332 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0514 01:11:42.253832   14332 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0514 01:11:42.297217   14332 ssh_runner.go:195] Run: grep 172.23.111.154	control-plane.minikube.internal$ /etc/hosts
	I0514 01:11:42.318126   14332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:11:42.619893   14332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0514 01:11:42.669123   14332 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\pause-851700 for IP: 172.23.111.154
	I0514 01:11:42.669197   14332 certs.go:194] generating shared ca certs ...
	I0514 01:11:42.669197   14332 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:11:42.669837   14332 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0514 01:11:42.669837   14332 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0514 01:11:42.670448   14332 certs.go:256] generating profile certs ...
	I0514 01:11:42.671214   14332 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\pause-851700\client.key
	I0514 01:11:42.671641   14332 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\pause-851700\apiserver.key.0c09c35c
	I0514 01:11:42.672060   14332 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\pause-851700\proxy-client.key
	I0514 01:11:42.673278   14332 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem (1338 bytes)
	W0514 01:11:42.673833   14332 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984_empty.pem, impossibly tiny 0 bytes
	I0514 01:11:42.674042   14332 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0514 01:11:42.674275   14332 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0514 01:11:42.674681   14332 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0514 01:11:42.675024   14332 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0514 01:11:42.675621   14332 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem (1708 bytes)
	I0514 01:11:42.677804   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0514 01:11:42.774965   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0514 01:11:42.849727   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0514 01:11:42.923725   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0514 01:11:42.990331   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\pause-851700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0514 01:11:43.049736   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\pause-851700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0514 01:11:43.101781   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\pause-851700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0514 01:11:41.256720     744 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	calico-204600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0514 01:11:41.256720     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:41.256720     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName calico-204600 -DynamicMemoryEnabled $false
	I0514 01:11:43.779833     744 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:11:43.779833     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:43.779833     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor calico-204600 -Count 2
	I0514 01:11:43.180030   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\pause-851700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0514 01:11:43.261761   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0514 01:11:43.321809   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem --> /usr/share/ca-certificates/5984.pem (1338 bytes)
	I0514 01:11:43.403140   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /usr/share/ca-certificates/59842.pem (1708 bytes)
	I0514 01:11:43.481818   14332 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0514 01:11:43.537331   14332 ssh_runner.go:195] Run: openssl version
	I0514 01:11:43.557614   14332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5984.pem && ln -fs /usr/share/ca-certificates/5984.pem /etc/ssl/certs/5984.pem"
	I0514 01:11:43.598548   14332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5984.pem
	I0514 01:11:43.609063   14332 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0514 01:11:43.618911   14332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5984.pem
	I0514 01:11:43.643552   14332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5984.pem /etc/ssl/certs/51391683.0"
	I0514 01:11:43.721054   14332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59842.pem && ln -fs /usr/share/ca-certificates/59842.pem /etc/ssl/certs/59842.pem"
	I0514 01:11:43.757838   14332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59842.pem
	I0514 01:11:43.764838   14332 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0514 01:11:43.778835   14332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59842.pem
	I0514 01:11:43.820776   14332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59842.pem /etc/ssl/certs/3ec20f2e.0"
	I0514 01:11:43.857792   14332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0514 01:11:43.912009   14332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0514 01:11:43.927755   14332 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0514 01:11:43.942026   14332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0514 01:11:43.968029   14332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0514 01:11:44.016037   14332 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0514 01:11:44.034768   14332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0514 01:11:44.058495   14332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0514 01:11:44.079118   14332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0514 01:11:44.098002   14332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0514 01:11:44.122882   14332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0514 01:11:44.155889   14332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0514 01:11:44.171511   14332 kubeadm.go:391] StartCluster: {Name:pause-851700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-851700 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.111.154 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:fal
se registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0514 01:11:44.184352   14332 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0514 01:11:44.243807   14332 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0514 01:11:44.279197   14332 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0514 01:11:44.279197   14332 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0514 01:11:44.279197   14332 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0514 01:11:44.292823   14332 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0514 01:11:44.333410   14332 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0514 01:11:44.334859   14332 kubeconfig.go:125] found "pause-851700" server: "https://172.23.111.154:8443"
	I0514 01:11:44.338795   14332 kapi.go:59] client config for pause-851700: &rest.Config{Host:"https://172.23.111.154:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\pause-851700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\pause-851700\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8
(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0514 01:11:44.350812   14332 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0514 01:11:44.381795   14332 kubeadm.go:624] The running cluster does not require reconfiguration: 172.23.111.154
	I0514 01:11:44.382800   14332 kubeadm.go:1154] stopping kube-system containers ...
	I0514 01:11:44.391837   14332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0514 01:11:44.480445   14332 docker.go:483] Stopping containers: [6088c2f87d78 3aa29f1051a6 07a402b65f7b 0546d4d05920 18eaec56489e 798a552412b8 f132fb594539 f388a99b7b43 5e24fe2e11bc c10f377eb282 478154bf5b5d 622c6ea48abc 7f4ef90b527b 42e4b7e0c0f9 9ed92f927933 d83b1ad1e1b8 d811e1abea1c bf69bb42be15 193d347f287d 7bd3613875f3 57d32ddf206f e8320cd44a55 e5c2689660d3 e2620eeb5a5e 393373d0eda5]
	I0514 01:11:44.491102   14332 ssh_runner.go:195] Run: docker stop 6088c2f87d78 3aa29f1051a6 07a402b65f7b 0546d4d05920 18eaec56489e 798a552412b8 f132fb594539 f388a99b7b43 5e24fe2e11bc c10f377eb282 478154bf5b5d 622c6ea48abc 7f4ef90b527b 42e4b7e0c0f9 9ed92f927933 d83b1ad1e1b8 d811e1abea1c bf69bb42be15 193d347f287d 7bd3613875f3 57d32ddf206f e8320cd44a55 e5c2689660d3 e2620eeb5a5e 393373d0eda5
	I0514 01:11:45.431991   14332 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0514 01:11:45.532784   14332 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0514 01:11:45.559621   14332 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5651 May 14 01:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 May 14 01:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 May 14 01:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 May 14 01:04 /etc/kubernetes/scheduler.conf
	
	I0514 01:11:45.569676   14332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0514 01:11:45.602764   14332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0514 01:11:45.640275   14332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0514 01:11:45.678948   14332 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0514 01:11:45.687924   14332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0514 01:11:45.713930   14332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0514 01:11:45.731769   14332 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0514 01:11:45.741425   14332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0514 01:11:45.770753   14332 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0514 01:11:45.792625   14332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0514 01:11:45.899026   14332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0514 01:11:47.026918   14332 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.1277378s)
	I0514 01:11:47.026918   14332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0514 01:11:47.352595   14332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0514 01:11:47.493976   14332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0514 01:11:47.639965   14332 api_server.go:52] waiting for apiserver process to appear ...
	I0514 01:11:47.655817   14332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 01:11:48.165248   14332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 01:11:46.229057     744 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:11:46.229368     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:46.229437     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName calico-204600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-204600\boot2docker.iso'
	I0514 01:11:48.890743     744 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:11:48.890743     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:48.891521     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName calico-204600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-204600\disk.vhd'
	I0514 01:11:48.652761   14332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 01:11:49.151491   14332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 01:11:49.661443   14332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 01:11:49.694263   14332 api_server.go:72] duration metric: took 2.0541028s to wait for apiserver process to appear ...
	I0514 01:11:49.694316   14332 api_server.go:88] waiting for apiserver healthz status ...
	I0514 01:11:49.694374   14332 api_server.go:253] Checking apiserver healthz at https://172.23.111.154:8443/healthz ...
	I0514 01:11:51.559622     744 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:11:51.559675     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:51.559675     744 main.go:141] libmachine: Starting VM...
	I0514 01:11:51.559758     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM calico-204600
	I0514 01:11:53.705984   14332 api_server.go:279] https://172.23.111.154:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0514 01:11:53.706380   14332 api_server.go:103] status: https://172.23.111.154:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0514 01:11:53.706380   14332 api_server.go:253] Checking apiserver healthz at https://172.23.111.154:8443/healthz ...
	I0514 01:11:53.778025   14332 api_server.go:279] https://172.23.111.154:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0514 01:11:53.778025   14332 api_server.go:103] status: https://172.23.111.154:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0514 01:11:54.202905   14332 api_server.go:253] Checking apiserver healthz at https://172.23.111.154:8443/healthz ...
	I0514 01:11:54.211460   14332 api_server.go:279] https://172.23.111.154:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0514 01:11:54.211604   14332 api_server.go:103] status: https://172.23.111.154:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0514 01:11:54.708616   14332 api_server.go:253] Checking apiserver healthz at https://172.23.111.154:8443/healthz ...
	I0514 01:11:54.717589   14332 api_server.go:279] https://172.23.111.154:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0514 01:11:54.717589   14332 api_server.go:103] status: https://172.23.111.154:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0514 01:11:55.196866   14332 api_server.go:253] Checking apiserver healthz at https://172.23.111.154:8443/healthz ...
	I0514 01:11:55.222580   14332 api_server.go:279] https://172.23.111.154:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0514 01:11:55.223265   14332 api_server.go:103] status: https://172.23.111.154:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0514 01:11:55.703049   14332 api_server.go:253] Checking apiserver healthz at https://172.23.111.154:8443/healthz ...
	I0514 01:11:55.710266   14332 api_server.go:279] https://172.23.111.154:8443/healthz returned 200:
	ok
	I0514 01:11:55.728100   14332 api_server.go:141] control plane version: v1.30.0
	I0514 01:11:55.728100   14332 api_server.go:131] duration metric: took 6.0333783s to wait for apiserver health ...
	I0514 01:11:55.728100   14332 cni.go:84] Creating CNI manager for ""
	I0514 01:11:55.728100   14332 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0514 01:11:55.731229   14332 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0514 01:11:55.742029   14332 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0514 01:11:55.770353   14332 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0514 01:11:55.825530   14332 system_pods.go:43] waiting for kube-system pods to appear ...
	I0514 01:11:55.854958   14332 system_pods.go:59] 6 kube-system pods found
	I0514 01:11:55.854958   14332 system_pods.go:61] "coredns-7db6d8ff4d-ntqd5" [10fdf7e7-0874-4abd-911e-88f6950f220a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0514 01:11:55.854958   14332 system_pods.go:61] "etcd-pause-851700" [8f211517-c814-49ef-ac6c-f22b10e36b62] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0514 01:11:55.854958   14332 system_pods.go:61] "kube-apiserver-pause-851700" [7bd68de3-ee66-48ce-899b-a7be9c13339c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0514 01:11:55.854958   14332 system_pods.go:61] "kube-controller-manager-pause-851700" [1dfabfcc-5216-403e-bc07-ca5f978e5435] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0514 01:11:55.854958   14332 system_pods.go:61] "kube-proxy-8qgfs" [0214f901-7bdf-4eab-81a1-5f041f2be6c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0514 01:11:55.854958   14332 system_pods.go:61] "kube-scheduler-pause-851700" [e1db2a1e-d04b-45ff-9ee0-f1fcf52b420f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0514 01:11:55.854958   14332 system_pods.go:74] duration metric: took 29.4267ms to wait for pod list to return data ...
	I0514 01:11:55.854958   14332 node_conditions.go:102] verifying NodePressure condition ...
	I0514 01:11:55.900459   14332 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 01:11:55.900543   14332 node_conditions.go:123] node cpu capacity is 2
	I0514 01:11:55.900543   14332 node_conditions.go:105] duration metric: took 45.582ms to run NodePressure ...
	I0514 01:11:55.900543   14332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0514 01:11:56.498857   14332 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0514 01:11:56.505938   14332 kubeadm.go:733] kubelet initialised
	I0514 01:11:56.505938   14332 kubeadm.go:734] duration metric: took 7.0804ms waiting for restarted kubelet to initialise ...
	I0514 01:11:56.505938   14332 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 01:11:56.516515   14332 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-ntqd5" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:54.842128     744 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:11:54.842230     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:54.842230     744 main.go:141] libmachine: Waiting for host to start...
	I0514 01:11:54.842305     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:11:57.248239     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:11:57.248835     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:57.248961     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:11:58.534074   14332 pod_ready.go:102] pod "coredns-7db6d8ff4d-ntqd5" in "kube-system" namespace has status "Ready":"False"
	I0514 01:12:00.537488   14332 pod_ready.go:102] pod "coredns-7db6d8ff4d-ntqd5" in "kube-system" namespace has status "Ready":"False"
	I0514 01:12:01.531177   14332 pod_ready.go:92] pod "coredns-7db6d8ff4d-ntqd5" in "kube-system" namespace has status "Ready":"True"
	I0514 01:12:01.531262   14332 pod_ready.go:81] duration metric: took 5.0144106s for pod "coredns-7db6d8ff4d-ntqd5" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:01.531262   14332 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:59.688055     744 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:11:59.688055     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:00.701210     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:12:02.883972     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:12:02.884160     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:02.884160     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:12:03.556950   14332 pod_ready.go:102] pod "etcd-pause-851700" in "kube-system" namespace has status "Ready":"False"
	I0514 01:12:06.056861   14332 pod_ready.go:102] pod "etcd-pause-851700" in "kube-system" namespace has status "Ready":"False"
	I0514 01:12:05.342786     744 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:12:05.343668     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:06.349348     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:12:08.644997     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:12:08.644997     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:08.644997     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:12:08.543040   14332 pod_ready.go:102] pod "etcd-pause-851700" in "kube-system" namespace has status "Ready":"False"
	I0514 01:12:10.052481   14332 pod_ready.go:92] pod "etcd-pause-851700" in "kube-system" namespace has status "Ready":"True"
	I0514 01:12:10.052546   14332 pod_ready.go:81] duration metric: took 8.5207117s for pod "etcd-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:10.052606   14332 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:10.062843   14332 pod_ready.go:92] pod "kube-apiserver-pause-851700" in "kube-system" namespace has status "Ready":"True"
	I0514 01:12:10.062843   14332 pod_ready.go:81] duration metric: took 10.2362ms for pod "kube-apiserver-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:10.062843   14332 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:10.071180   14332 pod_ready.go:92] pod "kube-controller-manager-pause-851700" in "kube-system" namespace has status "Ready":"True"
	I0514 01:12:10.071231   14332 pod_ready.go:81] duration metric: took 8.3882ms for pod "kube-controller-manager-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:10.071231   14332 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8qgfs" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:10.078893   14332 pod_ready.go:92] pod "kube-proxy-8qgfs" in "kube-system" namespace has status "Ready":"True"
	I0514 01:12:10.078947   14332 pod_ready.go:81] duration metric: took 7.7149ms for pod "kube-proxy-8qgfs" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:10.078947   14332 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:10.085021   14332 pod_ready.go:92] pod "kube-scheduler-pause-851700" in "kube-system" namespace has status "Ready":"True"
	I0514 01:12:10.085021   14332 pod_ready.go:81] duration metric: took 6.0739ms for pod "kube-scheduler-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:10.085021   14332 pod_ready.go:38] duration metric: took 13.5781722s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 01:12:10.085021   14332 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0514 01:12:10.104428   14332 ops.go:34] apiserver oom_adj: -16
	I0514 01:12:10.104428   14332 kubeadm.go:591] duration metric: took 25.8234985s to restartPrimaryControlPlane
	I0514 01:12:10.104428   14332 kubeadm.go:393] duration metric: took 25.9311771s to StartCluster
	I0514 01:12:10.104553   14332 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:12:10.104627   14332 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0514 01:12:10.110790   14332 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:12:10.112124   14332 start.go:234] Will wait 6m0s for node &{Name: IP:172.23.111.154 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0514 01:12:10.112124   14332 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0514 01:12:10.115777   14332 out.go:177] * Verifying Kubernetes components...
	I0514 01:12:10.112679   14332 config.go:182] Loaded profile config "pause-851700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:12:10.119666   14332 out.go:177] * Enabled addons: 
	I0514 01:12:10.128850   14332 addons.go:505] duration metric: took 16.8092ms for enable addons: enabled=[]
	I0514 01:12:10.134852   14332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:12:10.399514   14332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0514 01:12:10.441197   14332 node_ready.go:35] waiting up to 6m0s for node "pause-851700" to be "Ready" ...
	I0514 01:12:10.447177   14332 node_ready.go:49] node "pause-851700" has status "Ready":"True"
	I0514 01:12:10.447177   14332 node_ready.go:38] duration metric: took 5.9797ms for node "pause-851700" to be "Ready" ...
	I0514 01:12:10.447177   14332 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 01:12:10.457165   14332 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ntqd5" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:10.861614   14332 pod_ready.go:92] pod "coredns-7db6d8ff4d-ntqd5" in "kube-system" namespace has status "Ready":"True"
	I0514 01:12:10.861669   14332 pod_ready.go:81] duration metric: took 404.4774ms for pod "coredns-7db6d8ff4d-ntqd5" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:10.861669   14332 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:11.254797   14332 pod_ready.go:92] pod "etcd-pause-851700" in "kube-system" namespace has status "Ready":"True"
	I0514 01:12:11.254797   14332 pod_ready.go:81] duration metric: took 393.1009ms for pod "etcd-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:11.254797   14332 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:11.654281   14332 pod_ready.go:92] pod "kube-apiserver-pause-851700" in "kube-system" namespace has status "Ready":"True"
	I0514 01:12:11.654281   14332 pod_ready.go:81] duration metric: took 399.4576ms for pod "kube-apiserver-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:11.654281   14332 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:12.049435   14332 pod_ready.go:92] pod "kube-controller-manager-pause-851700" in "kube-system" namespace has status "Ready":"True"
	I0514 01:12:12.049482   14332 pod_ready.go:81] duration metric: took 395.1748ms for pod "kube-controller-manager-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:12.049482   14332 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8qgfs" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:12.460412   14332 pod_ready.go:92] pod "kube-proxy-8qgfs" in "kube-system" namespace has status "Ready":"True"
	I0514 01:12:12.460412   14332 pod_ready.go:81] duration metric: took 410.9019ms for pod "kube-proxy-8qgfs" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:12.460412   14332 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:12.852398   14332 pod_ready.go:92] pod "kube-scheduler-pause-851700" in "kube-system" namespace has status "Ready":"True"
	I0514 01:12:12.852398   14332 pod_ready.go:81] duration metric: took 391.9595ms for pod "kube-scheduler-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:12.852398   14332 pod_ready.go:38] duration metric: took 2.4050595s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 01:12:12.852398   14332 api_server.go:52] waiting for apiserver process to appear ...
	I0514 01:12:12.862409   14332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 01:12:12.892519   14332 api_server.go:72] duration metric: took 2.7801231s to wait for apiserver process to appear ...
	I0514 01:12:12.892587   14332 api_server.go:88] waiting for apiserver healthz status ...
	I0514 01:12:12.892651   14332 api_server.go:253] Checking apiserver healthz at https://172.23.111.154:8443/healthz ...
	I0514 01:12:12.906215   14332 api_server.go:279] https://172.23.111.154:8443/healthz returned 200:
	ok
	I0514 01:12:12.908487   14332 api_server.go:141] control plane version: v1.30.0
	I0514 01:12:12.908487   14332 api_server.go:131] duration metric: took 15.8351ms to wait for apiserver health ...
	I0514 01:12:12.908487   14332 system_pods.go:43] waiting for kube-system pods to appear ...
	I0514 01:12:13.069296   14332 system_pods.go:59] 6 kube-system pods found
	I0514 01:12:13.069337   14332 system_pods.go:61] "coredns-7db6d8ff4d-ntqd5" [10fdf7e7-0874-4abd-911e-88f6950f220a] Running
	I0514 01:12:13.069337   14332 system_pods.go:61] "etcd-pause-851700" [8f211517-c814-49ef-ac6c-f22b10e36b62] Running
	I0514 01:12:13.069337   14332 system_pods.go:61] "kube-apiserver-pause-851700" [7bd68de3-ee66-48ce-899b-a7be9c13339c] Running
	I0514 01:12:13.069337   14332 system_pods.go:61] "kube-controller-manager-pause-851700" [1dfabfcc-5216-403e-bc07-ca5f978e5435] Running
	I0514 01:12:13.069395   14332 system_pods.go:61] "kube-proxy-8qgfs" [0214f901-7bdf-4eab-81a1-5f041f2be6c5] Running
	I0514 01:12:13.069395   14332 system_pods.go:61] "kube-scheduler-pause-851700" [e1db2a1e-d04b-45ff-9ee0-f1fcf52b420f] Running
	I0514 01:12:13.069420   14332 system_pods.go:74] duration metric: took 160.8971ms to wait for pod list to return data ...
	I0514 01:12:13.069420   14332 default_sa.go:34] waiting for default service account to be created ...
	I0514 01:12:13.260374   14332 default_sa.go:45] found service account: "default"
	I0514 01:12:13.260374   14332 default_sa.go:55] duration metric: took 190.9414ms for default service account to be created ...
	I0514 01:12:13.260374   14332 system_pods.go:116] waiting for k8s-apps to be running ...
	I0514 01:12:13.453389   14332 system_pods.go:86] 6 kube-system pods found
	I0514 01:12:13.453389   14332 system_pods.go:89] "coredns-7db6d8ff4d-ntqd5" [10fdf7e7-0874-4abd-911e-88f6950f220a] Running
	I0514 01:12:13.453389   14332 system_pods.go:89] "etcd-pause-851700" [8f211517-c814-49ef-ac6c-f22b10e36b62] Running
	I0514 01:12:13.453389   14332 system_pods.go:89] "kube-apiserver-pause-851700" [7bd68de3-ee66-48ce-899b-a7be9c13339c] Running
	I0514 01:12:13.453389   14332 system_pods.go:89] "kube-controller-manager-pause-851700" [1dfabfcc-5216-403e-bc07-ca5f978e5435] Running
	I0514 01:12:13.453389   14332 system_pods.go:89] "kube-proxy-8qgfs" [0214f901-7bdf-4eab-81a1-5f041f2be6c5] Running
	I0514 01:12:13.453389   14332 system_pods.go:89] "kube-scheduler-pause-851700" [e1db2a1e-d04b-45ff-9ee0-f1fcf52b420f] Running
	I0514 01:12:13.453389   14332 system_pods.go:126] duration metric: took 193.0024ms to wait for k8s-apps to be running ...
	I0514 01:12:13.453389   14332 system_svc.go:44] waiting for kubelet service to be running ....
	I0514 01:12:13.467393   14332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0514 01:12:13.503917   14332 system_svc.go:56] duration metric: took 50.4869ms WaitForService to wait for kubelet
	I0514 01:12:13.503980   14332 kubeadm.go:576] duration metric: took 3.3915429s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0514 01:12:13.503980   14332 node_conditions.go:102] verifying NodePressure condition ...
	I0514 01:12:13.653212   14332 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 01:12:13.653212   14332 node_conditions.go:123] node cpu capacity is 2
	I0514 01:12:13.653758   14332 node_conditions.go:105] duration metric: took 149.7683ms to run NodePressure ...
	I0514 01:12:13.653758   14332 start.go:240] waiting for startup goroutines ...
	I0514 01:12:13.653819   14332 start.go:245] waiting for cluster config update ...
	I0514 01:12:13.653819   14332 start.go:254] writing updated cluster config ...
	I0514 01:12:13.669726   14332 ssh_runner.go:195] Run: rm -f paused
	I0514 01:12:13.810990   14332 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0514 01:12:13.815021   14332 out.go:177] * Done! kubectl is now configured to use "pause-851700" cluster and "default" namespace by default
	I0514 01:12:11.112581     744 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:12:11.112581     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:12.124000     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:12:14.548641     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:12:14.548641     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:14.548641     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:12:17.466878     744 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:12:17.466878     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:18.480050     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:12:21.061046     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:12:21.061046     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:21.061177     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:12:23.998675     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:12:23.998675     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:23.998873     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:12:26.321136     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:12:26.321693     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:26.321693     744 machine.go:94] provisionDockerMachine start ...
	I0514 01:12:26.321900     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:12:28.686188     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:12:28.686188     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:28.686188     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:12:31.410236     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:12:31.410236     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:31.417276     744 main.go:141] libmachine: Using SSH client type: native
	I0514 01:12:31.417656     744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.106.124 22 <nil> <nil>}
	I0514 01:12:31.417656     744 main.go:141] libmachine: About to run SSH command:
	hostname
	I0514 01:12:31.562759     744 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0514 01:12:31.562759     744 buildroot.go:166] provisioning hostname "calico-204600"
	I0514 01:12:31.562759     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:12:33.840269     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:12:33.840322     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:33.840322     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:12:36.568456     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:12:36.568590     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:36.574154     744 main.go:141] libmachine: Using SSH client type: native
	I0514 01:12:36.574907     744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.106.124 22 <nil> <nil>}
	I0514 01:12:36.574907     744 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-204600 && echo "calico-204600" | sudo tee /etc/hostname
	I0514 01:12:36.760819     744 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-204600
	
	I0514 01:12:36.760893     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:12:39.063025     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:12:39.063171     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:39.063227     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:12:41.800938     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:12:41.801004     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:41.805838     744 main.go:141] libmachine: Using SSH client type: native
	I0514 01:12:41.806432     744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.106.124 22 <nil> <nil>}
	I0514 01:12:41.806474     744 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-204600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-204600/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-204600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0514 01:12:41.970277     744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0514 01:12:41.970277     744 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0514 01:12:41.970277     744 buildroot.go:174] setting up certificates
	I0514 01:12:41.970277     744 provision.go:84] configureAuth start
	I0514 01:12:41.970277     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:12:44.265055     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:12:44.265055     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:44.265177     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:12:47.173167     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:12:47.173296     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:47.173296     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:12:49.584844     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:12:49.584844     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:49.585006     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:12:52.363398     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:12:52.363503     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:52.363564     744 provision.go:143] copyHostCerts
	I0514 01:12:52.363979     744 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0514 01:12:52.363979     744 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0514 01:12:52.364668     744 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0514 01:12:52.366069     744 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0514 01:12:52.366069     744 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0514 01:12:52.366694     744 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0514 01:12:52.368163     744 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0514 01:12:52.368163     744 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0514 01:12:52.368423     744 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0514 01:12:52.370026     744 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.calico-204600 san=[127.0.0.1 172.23.106.124 calico-204600 localhost minikube]
	I0514 01:12:52.555598     744 provision.go:177] copyRemoteCerts
	I0514 01:12:52.563590     744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0514 01:12:52.563590     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:12:54.840874     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:12:54.841271     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:54.841388     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:12:57.601993     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:12:57.602065     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:57.602065     744 sshutil.go:53] new ssh client: &{IP:172.23.106.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-204600\id_rsa Username:docker}
	I0514 01:12:57.716833     744 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1528574s)
	I0514 01:12:57.716833     744 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0514 01:12:57.770421     744 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0514 01:12:57.821289     744 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0514 01:12:57.868095     744 provision.go:87] duration metric: took 15.8967535s to configureAuth
	I0514 01:12:57.868095     744 buildroot.go:189] setting minikube options for container-runtime
	I0514 01:12:57.868830     744 config.go:182] Loaded profile config "calico-204600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:12:57.868830     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:13:00.208633     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:13:00.208681     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:00.208681     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:13:02.935580     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:13:02.935580     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:02.940289     744 main.go:141] libmachine: Using SSH client type: native
	I0514 01:13:02.940810     744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.106.124 22 <nil> <nil>}
	I0514 01:13:02.940912     744 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0514 01:13:03.085296     744 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0514 01:13:03.085296     744 buildroot.go:70] root file system type: tmpfs
	I0514 01:13:03.085296     744 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0514 01:13:03.085855     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:13:05.568098     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:13:05.568976     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:05.569060     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:13:08.375888     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:13:08.375888     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:08.380890     744 main.go:141] libmachine: Using SSH client type: native
	I0514 01:13:08.380890     744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.106.124 22 <nil> <nil>}
	I0514 01:13:08.380890     744 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0514 01:13:08.554746     744 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0514 01:13:08.554746     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:13:10.869421     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:13:10.869811     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:10.869811     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:13:13.555500     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:13:13.555584     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:13.560037     744 main.go:141] libmachine: Using SSH client type: native
	I0514 01:13:13.560037     744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.106.124 22 <nil> <nil>}
	I0514 01:13:13.560037     744 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	
	
	==> Docker <==
	May 14 01:11:49 pause-851700 dockerd[4981]: time="2024-05-14T01:11:49.740376303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 14 01:11:49 pause-851700 dockerd[4981]: time="2024-05-14T01:11:49.741201049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 14 01:11:49 pause-851700 dockerd[4981]: time="2024-05-14T01:11:49.741453963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 14 01:11:49 pause-851700 dockerd[4981]: time="2024-05-14T01:11:49.741948090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 14 01:11:54 pause-851700 cri-dockerd[5201]: time="2024-05-14T01:11:54Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	May 14 01:11:55 pause-851700 dockerd[4981]: time="2024-05-14T01:11:55.486291061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 14 01:11:55 pause-851700 dockerd[4981]: time="2024-05-14T01:11:55.486392767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 14 01:11:55 pause-851700 dockerd[4981]: time="2024-05-14T01:11:55.486412668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 14 01:11:55 pause-851700 dockerd[4981]: time="2024-05-14T01:11:55.486563377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 14 01:11:55 pause-851700 dockerd[4981]: time="2024-05-14T01:11:55.530253190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 14 01:11:55 pause-851700 dockerd[4981]: time="2024-05-14T01:11:55.530782122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 14 01:11:55 pause-851700 dockerd[4981]: time="2024-05-14T01:11:55.531175445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 14 01:11:55 pause-851700 dockerd[4981]: time="2024-05-14T01:11:55.532962552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 14 01:11:55 pause-851700 cri-dockerd[5201]: time="2024-05-14T01:11:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a8406ba2f2f82acab267a14fc1b7ac3ba3873ccaae88c257724b85d9e493c25e/resolv.conf as [nameserver 172.23.96.1]"
	May 14 01:11:55 pause-851700 cri-dockerd[5201]: time="2024-05-14T01:11:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8131d02210f06fa96210a39cecce86ace1b24a67c7d71638d01f441564d439e1/resolv.conf as [nameserver 172.23.96.1]"
	May 14 01:11:55 pause-851700 dockerd[4981]: time="2024-05-14T01:11:55.959613794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 14 01:11:55 pause-851700 dockerd[4981]: time="2024-05-14T01:11:55.960017518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 14 01:11:55 pause-851700 dockerd[4981]: time="2024-05-14T01:11:55.960152526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 14 01:11:55 pause-851700 dockerd[4981]: time="2024-05-14T01:11:55.960399541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 14 01:11:56 pause-851700 dockerd[4981]: time="2024-05-14T01:11:56.326529119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 14 01:11:56 pause-851700 dockerd[4981]: time="2024-05-14T01:11:56.326712131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 14 01:11:56 pause-851700 dockerd[4981]: time="2024-05-14T01:11:56.327061353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 14 01:11:56 pause-851700 dockerd[4981]: time="2024-05-14T01:11:56.327515882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 14 01:12:44 pause-851700 cri-dockerd[5201]: time="2024-05-14T01:12:44Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	May 14 01:12:44 pause-851700 cri-dockerd[5201]: time="2024-05-14T01:12:44Z" level=error msg="Failed to retrieve checkpoint for sandbox 0546d4d0592055cd55dd68fabffd6504ae4a879eb41b1c0170214f0d5fcdcddc: checkpoint is not found"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a1ecdc98e3b06       cbb01a7bd410d       About a minute ago   Running             coredns                   1                   8131d02210f06       coredns-7db6d8ff4d-ntqd5
	8b6f668b98e5c       a0bf559e280cf       About a minute ago   Running             kube-proxy                2                   a8406ba2f2f82       kube-proxy-8qgfs
	f0158cf67f9e9       259c8277fcbbc       About a minute ago   Running             kube-scheduler            2                   221fc404646e9       kube-scheduler-pause-851700
	66e920ff9a6f6       3861cfcd7c04c       About a minute ago   Running             etcd                      2                   d339a10b09a1d       etcd-pause-851700
	040c2ded4465d       c42f13656d0b2       About a minute ago   Running             kube-apiserver            2                   ea5b119d99b57       kube-apiserver-pause-851700
	eda66ff4e85fd       c7aad43836fa5       About a minute ago   Running             kube-controller-manager   2                   72215b2606f06       kube-controller-manager-pause-851700
	49157b1b723fe       a0bf559e280cf       About a minute ago   Created             kube-proxy                1                   798a552412b89       kube-proxy-8qgfs
	62549574b37b7       259c8277fcbbc       About a minute ago   Created             kube-scheduler            1                   18eaec56489e6       kube-scheduler-pause-851700
	6088c2f87d781       c42f13656d0b2       About a minute ago   Created             kube-apiserver            1                   f132fb594539d       kube-apiserver-pause-851700
	3aa29f1051a64       3861cfcd7c04c       About a minute ago   Exited              etcd                      1                   f388a99b7b433       etcd-pause-851700
	07a402b65f7be       c7aad43836fa5       About a minute ago   Exited              kube-controller-manager   1                   5e24fe2e11bcd       kube-controller-manager-pause-851700
	42e4b7e0c0f98       cbb01a7bd410d       8 minutes ago        Exited              coredns                   0                   d83b1ad1e1b80       coredns-7db6d8ff4d-ntqd5
	
	
	==> coredns [42e4b7e0c0f9] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[865714426]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-May-2024 01:04:52.815) (total time: 30000ms):
	Trace[865714426]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (01:05:22.815)
	Trace[865714426]: [30.0007496s] [30.0007496s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2016310029]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-May-2024 01:04:52.813) (total time: 30003ms):
	Trace[2016310029]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (01:05:22.815)
	Trace[2016310029]: [30.003514414s] [30.003514414s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1842595072]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-May-2024 01:04:52.813) (total time: 30004ms):
	Trace[1842595072]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (01:05:22.814)
	Trace[1842595072]: [30.004683995s] [30.004683995s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = aa3c53a4fee7c79042020c4ad5abc53f615c90ace85c56ddcef4febd643c83c914a53a500e1bfe4eab6dd4f6a22b9d2014a8ba875b505ed10d3063ed95ac2ed3
	[INFO] Reloading complete
	[INFO] 127.0.0.1:59119 - 9417 "HINFO IN 2341313173456037861.7315749896242332163. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.04089532s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a1ecdc98e3b0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = aa3c53a4fee7c79042020c4ad5abc53f615c90ace85c56ddcef4febd643c83c914a53a500e1bfe4eab6dd4f6a22b9d2014a8ba875b505ed10d3063ed95ac2ed3
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41921 - 57314 "HINFO IN 2819303177314173937.66606858249195375. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.049676848s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	
	
	==> dmesg <==
	[  +0.217619] kauditd_printk_skb: 12 callbacks suppressed
	[May14 01:05] kauditd_printk_skb: 88 callbacks suppressed
	[May14 01:08] hrtimer: interrupt took 4429302 ns
	[May14 01:11] systemd-fstab-generator[4538]: Ignoring "noauto" option for root device
	[  +0.631914] systemd-fstab-generator[4573]: Ignoring "noauto" option for root device
	[  +0.304771] systemd-fstab-generator[4586]: Ignoring "noauto" option for root device
	[  +0.330597] systemd-fstab-generator[4599]: Ignoring "noauto" option for root device
	[  +5.347338] kauditd_printk_skb: 87 callbacks suppressed
	[  +8.109134] systemd-fstab-generator[5150]: Ignoring "noauto" option for root device
	[  +0.235325] systemd-fstab-generator[5161]: Ignoring "noauto" option for root device
	[  +0.227992] systemd-fstab-generator[5173]: Ignoring "noauto" option for root device
	[  +0.316600] systemd-fstab-generator[5188]: Ignoring "noauto" option for root device
	[  +0.967342] systemd-fstab-generator[5345]: Ignoring "noauto" option for root device
	[  +0.368495] kauditd_printk_skb: 140 callbacks suppressed
	[  +4.423256] systemd-fstab-generator[6188]: Ignoring "noauto" option for root device
	[  +1.337849] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.832931] kauditd_printk_skb: 30 callbacks suppressed
	[May14 01:12] kauditd_printk_skb: 19 callbacks suppressed
	[  +3.583456] systemd-fstab-generator[7151]: Ignoring "noauto" option for root device
	[ +12.246180] systemd-fstab-generator[7231]: Ignoring "noauto" option for root device
	[  +0.162952] kauditd_printk_skb: 14 callbacks suppressed
	[ +21.400915] systemd-fstab-generator[7513]: Ignoring "noauto" option for root device
	[  +0.180512] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.264812] systemd-fstab-generator[7610]: Ignoring "noauto" option for root device
	[  +0.168635] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [3aa29f1051a6] <==
	{"level":"info","ts":"2024-05-14T01:11:44.850126Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"26.367094ms"}
	{"level":"info","ts":"2024-05-14T01:11:44.885272Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-05-14T01:11:44.931913Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"5c0ab8bdc2f3a27e","local-member-id":"2c597cdbed357cb1","commit-index":640}
	{"level":"info","ts":"2024-05-14T01:11:44.932095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2c597cdbed357cb1 switched to configuration voters=()"}
	{"level":"info","ts":"2024-05-14T01:11:44.932122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2c597cdbed357cb1 became follower at term 2"}
	{"level":"info","ts":"2024-05-14T01:11:44.932135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 2c597cdbed357cb1 [peers: [], term: 2, commit: 640, applied: 0, lastindex: 640, lastterm: 2]"}
	{"level":"warn","ts":"2024-05-14T01:11:44.942697Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-05-14T01:11:44.971466Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":550}
	{"level":"info","ts":"2024-05-14T01:11:44.98838Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-05-14T01:11:45.002895Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"2c597cdbed357cb1","timeout":"7s"}
	{"level":"info","ts":"2024-05-14T01:11:45.003313Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"2c597cdbed357cb1"}
	{"level":"info","ts":"2024-05-14T01:11:45.003379Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"2c597cdbed357cb1","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-05-14T01:11:45.005761Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-05-14T01:11:45.0059Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-14T01:11:45.005931Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-14T01:11:45.00594Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-14T01:11:45.006282Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2c597cdbed357cb1 switched to configuration voters=(3195722694615465137)"}
	{"level":"info","ts":"2024-05-14T01:11:45.006333Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5c0ab8bdc2f3a27e","local-member-id":"2c597cdbed357cb1","added-peer-id":"2c597cdbed357cb1","added-peer-peer-urls":["https://172.23.111.154:2380"]}
	{"level":"info","ts":"2024-05-14T01:11:45.006427Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5c0ab8bdc2f3a27e","local-member-id":"2c597cdbed357cb1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-14T01:11:45.006453Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-14T01:11:45.019921Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-14T01:11:45.020298Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2c597cdbed357cb1","initial-advertise-peer-urls":["https://172.23.111.154:2380"],"listen-peer-urls":["https://172.23.111.154:2380"],"advertise-client-urls":["https://172.23.111.154:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.111.154:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-14T01:11:45.020375Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-14T01:11:45.020489Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.23.111.154:2380"}
	{"level":"info","ts":"2024-05-14T01:11:45.020508Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.23.111.154:2380"}
	
	
	==> etcd [66e920ff9a6f] <==
	{"level":"info","ts":"2024-05-14T01:11:51.98058Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-14T01:11:51.982918Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.23.111.154:2379"}
	{"level":"info","ts":"2024-05-14T01:11:51.986866Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-14T01:11:51.987059Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-14T01:11:54.22352Z","caller":"traceutil/trace.go:171","msg":"trace[228720461] linearizableReadLoop","detail":"{readStateIndex:643; appliedIndex:642; }","duration":"117.599852ms","start":"2024-05-14T01:11:54.105903Z","end":"2024-05-14T01:11:54.223503Z","steps":["trace[228720461] 'read index received'  (duration: 117.418841ms)","trace[228720461] 'applied index is now lower than readState.Index'  (duration: 180.511µs)"],"step_count":2}
	{"level":"info","ts":"2024-05-14T01:11:54.224037Z","caller":"traceutil/trace.go:171","msg":"trace[959813969] transaction","detail":"{read_only:false; response_revision:551; number_of_response:1; }","duration":"123.14508ms","start":"2024-05-14T01:11:54.100878Z","end":"2024-05-14T01:11:54.224024Z","steps":["trace[959813969] 'process raft request'  (duration: 122.486741ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-14T01:11:54.224459Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.533108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:118"}
	{"level":"info","ts":"2024-05-14T01:11:54.22457Z","caller":"traceutil/trace.go:171","msg":"trace[1864620139] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:551; }","duration":"118.677216ms","start":"2024-05-14T01:11:54.105883Z","end":"2024-05-14T01:11:54.22456Z","steps":["trace[1864620139] 'agreement among raft nodes before linearized reading'  (duration: 118.522707ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-14T01:11:54.234265Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.410578ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" ","response":"range_response_count:2 size:1908"}
	{"level":"info","ts":"2024-05-14T01:11:54.235066Z","caller":"traceutil/trace.go:171","msg":"trace[969980280] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:2; response_revision:552; }","duration":"122.243727ms","start":"2024-05-14T01:11:54.112811Z","end":"2024-05-14T01:11:54.235055Z","steps":["trace[969980280] 'agreement among raft nodes before linearized reading'  (duration: 121.295771ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-14T01:11:54.234714Z","caller":"traceutil/trace.go:171","msg":"trace[1617278336] transaction","detail":"{read_only:false; response_revision:552; number_of_response:1; }","duration":"128.73031ms","start":"2024-05-14T01:11:54.10597Z","end":"2024-05-14T01:11:54.234701Z","steps":["trace[1617278336] 'process raft request'  (duration: 127.961965ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-14T01:11:54.23475Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.843039ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/172.23.111.154\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-05-14T01:11:54.235249Z","caller":"traceutil/trace.go:171","msg":"trace[965550647] range","detail":"{range_begin:/registry/masterleases/172.23.111.154; range_end:; response_count:1; response_revision:552; }","duration":"104.37267ms","start":"2024-05-14T01:11:54.130867Z","end":"2024-05-14T01:11:54.23524Z","steps":["trace[965550647] 'agreement among raft nodes before linearized reading'  (duration: 103.841439ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-14T01:11:57.811419Z","caller":"traceutil/trace.go:171","msg":"trace[1487128947] linearizableReadLoop","detail":"{readStateIndex:675; appliedIndex:674; }","duration":"228.045578ms","start":"2024-05-14T01:11:57.583356Z","end":"2024-05-14T01:11:57.811401Z","steps":["trace[1487128947] 'read index received'  (duration: 139.814777ms)","trace[1487128947] 'applied index is now lower than readState.Index'  (duration: 88.229901ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-14T01:11:57.811725Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"228.354198ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-ntqd5\" ","response":"range_response_count:1 size:4966"}
	{"level":"info","ts":"2024-05-14T01:11:57.811455Z","caller":"traceutil/trace.go:171","msg":"trace[1399960450] transaction","detail":"{read_only:false; response_revision:566; number_of_response:1; }","duration":"347.086635ms","start":"2024-05-14T01:11:57.464346Z","end":"2024-05-14T01:11:57.811432Z","steps":["trace[1399960450] 'process raft request'  (duration: 258.883936ms)","trace[1399960450] 'compare'  (duration: 87.685567ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-14T01:11:57.816274Z","caller":"traceutil/trace.go:171","msg":"trace[1194517682] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-ntqd5; range_end:; response_count:1; response_revision:566; }","duration":"232.034631ms","start":"2024-05-14T01:11:57.583327Z","end":"2024-05-14T01:11:57.815362Z","steps":["trace[1194517682] 'agreement among raft nodes before linearized reading'  (duration: 228.251291ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-14T01:11:57.818468Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-14T01:11:57.464329Z","time spent":"351.626024ms","remote":"127.0.0.1:49744","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":616,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-851700.17cf35c60b33d78b\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-851700.17cf35c60b33d78b\" value_size:544 lease:8985120462776342394 >> failure:<>"}
	{"level":"warn","ts":"2024-05-14T01:11:58.259961Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"244.786469ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8985120462776342400 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-851700.17cf35c6118f5b07\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-851700.17cf35c6118f5b07\" value_size:598 lease:8985120462776342394 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-05-14T01:11:58.26101Z","caller":"traceutil/trace.go:171","msg":"trace[1232228386] transaction","detail":"{read_only:false; response_revision:567; number_of_response:1; }","duration":"438.154247ms","start":"2024-05-14T01:11:57.822835Z","end":"2024-05-14T01:11:58.260989Z","steps":["trace[1232228386] 'process raft request'  (duration: 192.275909ms)","trace[1232228386] 'compare'  (duration: 244.392943ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-14T01:11:58.261844Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-14T01:11:57.822822Z","time spent":"438.946797ms","remote":"127.0.0.1:49744","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":670,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-851700.17cf35c6118f5b07\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-851700.17cf35c6118f5b07\" value_size:598 lease:8985120462776342394 >> failure:<>"}
	{"level":"info","ts":"2024-05-14T01:11:58.264035Z","caller":"traceutil/trace.go:171","msg":"trace[2003042396] transaction","detail":"{read_only:false; response_revision:568; number_of_response:1; }","duration":"438.844291ms","start":"2024-05-14T01:11:57.825178Z","end":"2024-05-14T01:11:58.264022Z","steps":["trace[2003042396] 'process raft request'  (duration: 434.877239ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-14T01:11:58.264617Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-14T01:11:57.825163Z","time spent":"439.097507ms","remote":"127.0.0.1:49850","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5066,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-ntqd5\" mod_revision:557 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-ntqd5\" value_size:5007 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-ntqd5\" > >"}
	{"level":"info","ts":"2024-05-14T01:11:58.719099Z","caller":"traceutil/trace.go:171","msg":"trace[1912588650] transaction","detail":"{read_only:false; response_revision:570; number_of_response:1; }","duration":"356.724687ms","start":"2024-05-14T01:11:58.362356Z","end":"2024-05-14T01:11:58.719081Z","steps":["trace[1912588650] 'process raft request'  (duration: 356.320762ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-14T01:11:58.719483Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-14T01:11:58.362339Z","time spent":"357.058608ms","remote":"127.0.0.1:49744","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":664,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-851700.17cf35c6118f80ec\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-851700.17cf35c6118f80ec\" value_size:592 lease:8985120462776342394 >> failure:<>"}
	
	
	==> kernel <==
	 01:13:37 up 11 min,  0 users,  load average: 0.18, 0.37, 0.25
	Linux pause-851700 5.10.207 #1 SMP Thu May 9 02:07:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [040c2ded4465] <==
	I0514 01:11:54.074702       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0514 01:11:54.074741       1 policy_source.go:224] refreshing policies
	I0514 01:11:54.096975       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0514 01:11:54.124370       1 shared_informer.go:320] Caches are synced for configmaps
	I0514 01:11:54.126150       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0514 01:11:54.126569       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0514 01:11:54.127363       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0514 01:11:54.128210       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0514 01:11:54.134062       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0514 01:11:54.135818       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0514 01:11:54.142671       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0514 01:11:54.142951       1 aggregator.go:165] initial CRD sync complete...
	I0514 01:11:54.143242       1 autoregister_controller.go:141] Starting autoregister controller
	I0514 01:11:54.143266       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0514 01:11:54.143430       1 cache.go:39] Caches are synced for autoregister controller
	I0514 01:11:54.168057       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0514 01:11:54.258415       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0514 01:11:54.975202       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0514 01:11:56.467054       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0514 01:11:56.522045       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0514 01:11:56.633781       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0514 01:11:56.714218       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0514 01:11:56.727779       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0514 01:12:06.856401       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0514 01:12:06.939490       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [6088c2f87d78] <==
	
	
	==> kube-controller-manager [07a402b65f7b] <==
	
	
	==> kube-controller-manager [eda66ff4e85f] <==
	I0514 01:12:06.906288       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0514 01:12:06.911190       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0514 01:12:06.915539       1 shared_informer.go:320] Caches are synced for namespace
	I0514 01:12:06.918544       1 shared_informer.go:320] Caches are synced for attach detach
	I0514 01:12:06.921611       1 shared_informer.go:320] Caches are synced for node
	I0514 01:12:06.921840       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0514 01:12:06.922060       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0514 01:12:06.922270       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0514 01:12:06.922419       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0514 01:12:06.927178       1 shared_informer.go:320] Caches are synced for TTL
	I0514 01:12:06.927263       1 shared_informer.go:320] Caches are synced for endpoint
	I0514 01:12:06.927275       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0514 01:12:06.930206       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0514 01:12:06.942424       1 shared_informer.go:320] Caches are synced for crt configmap
	I0514 01:12:07.035703       1 shared_informer.go:320] Caches are synced for disruption
	I0514 01:12:07.059785       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0514 01:12:07.060894       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 01:12:07.105711       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 01:12:07.141262       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0514 01:12:07.156749       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0514 01:12:07.157006       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0514 01:12:07.157930       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0514 01:12:07.540788       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 01:12:07.569005       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 01:12:07.569087       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [49157b1b723f] <==
	
	
	==> kube-proxy [8b6f668b98e5] <==
	I0514 01:11:56.348108       1 server_linux.go:69] "Using iptables proxy"
	I0514 01:11:56.376262       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.111.154"]
	I0514 01:11:56.446080       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0514 01:11:56.446125       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0514 01:11:56.446145       1 server_linux.go:165] "Using iptables Proxier"
	I0514 01:11:56.452533       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0514 01:11:56.453003       1 server.go:872] "Version info" version="v1.30.0"
	I0514 01:11:56.453315       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 01:11:56.455072       1 config.go:192] "Starting service config controller"
	I0514 01:11:56.455233       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0514 01:11:56.455390       1 config.go:101] "Starting endpoint slice config controller"
	I0514 01:11:56.456877       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0514 01:11:56.456349       1 config.go:319] "Starting node config controller"
	I0514 01:11:56.462422       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0514 01:11:56.556800       1 shared_informer.go:320] Caches are synced for service config
	I0514 01:11:56.563145       1 shared_informer.go:320] Caches are synced for node config
	I0514 01:11:56.563246       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [62549574b37b] <==
	
	
	==> kube-scheduler [f0158cf67f9e] <==
	I0514 01:11:51.916051       1 serving.go:380] Generated self-signed cert in-memory
	W0514 01:11:54.027065       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0514 01:11:54.027472       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0514 01:11:54.027697       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0514 01:11:54.027819       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0514 01:11:54.108344       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0514 01:11:54.108660       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 01:11:54.114506       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0514 01:11:54.114549       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 01:11:54.118103       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0514 01:11:54.118162       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 01:11:54.216025       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.150526    7520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/feb1bef467bbb676eeced6d36d10658b-etcd-data\") pod \"etcd-pause-851700\" (UID: \"feb1bef467bbb676eeced6d36d10658b\") " pod="kube-system/etcd-pause-851700"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.150638    7520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee963cc10a9538d2029afc43406ef6e0-k8s-certs\") pod \"kube-apiserver-pause-851700\" (UID: \"ee963cc10a9538d2029afc43406ef6e0\") " pod="kube-system/kube-apiserver-pause-851700"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.150806    7520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee963cc10a9538d2029afc43406ef6e0-usr-share-ca-certificates\") pod \"kube-apiserver-pause-851700\" (UID: \"ee963cc10a9538d2029afc43406ef6e0\") " pod="kube-system/kube-apiserver-pause-851700"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.151043    7520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c3eeefc2a42d7fe4e61d2d6a0aba0d1e-kubeconfig\") pod \"kube-controller-manager-pause-851700\" (UID: \"c3eeefc2a42d7fe4e61d2d6a0aba0d1e\") " pod="kube-system/kube-controller-manager-pause-851700"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.181071    7520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e24fe2e11bcd28f48b8ba98586ba8383b12ab8d148660f799c2a70771b0fa9d"
	May 14 01:12:45 pause-851700 kubelet[7520]: E0514 01:12:45.201274    7520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"etcd-pause-851700\" already exists" pod="kube-system/etcd-pause-851700"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.209066    7520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d83b1ad1e1b80c0a2d70e0f025e883f7a1bb91f92ff2ea3f0bff8e555ca9ef90"
	May 14 01:12:45 pause-851700 kubelet[7520]: E0514 01:12:45.223771    7520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-pause-851700\" already exists" pod="kube-system/kube-controller-manager-pause-851700"
	May 14 01:12:45 pause-851700 kubelet[7520]: E0514 01:12:45.223826    7520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-851700\" already exists" pod="kube-system/kube-apiserver-pause-851700"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.228658    7520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18eaec56489e6b1d47561c044caa199305444897a9985c4fd6d20b9608c84c4d"
	May 14 01:12:45 pause-851700 kubelet[7520]: E0514 01:12:45.245519    7520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-pause-851700\" already exists" pod="kube-system/kube-scheduler-pause-851700"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.255326    7520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="798a552412b896cd3e2b5c1492dfab1a53a4e23bf19b970c9934a9a962bc6dea"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.742920    7520 apiserver.go:52] "Watching apiserver"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.749089    7520 topology_manager.go:215] "Topology Admit Handler" podUID="10fdf7e7-0874-4abd-911e-88f6950f220a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ntqd5"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.749539    7520 topology_manager.go:215] "Topology Admit Handler" podUID="0214f901-7bdf-4eab-81a1-5f041f2be6c5" podNamespace="kube-system" podName="kube-proxy-8qgfs"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.761170    7520 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.856728    7520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0214f901-7bdf-4eab-81a1-5f041f2be6c5-xtables-lock\") pod \"kube-proxy-8qgfs\" (UID: \"0214f901-7bdf-4eab-81a1-5f041f2be6c5\") " pod="kube-system/kube-proxy-8qgfs"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.856825    7520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0214f901-7bdf-4eab-81a1-5f041f2be6c5-lib-modules\") pod \"kube-proxy-8qgfs\" (UID: \"0214f901-7bdf-4eab-81a1-5f041f2be6c5\") " pod="kube-system/kube-proxy-8qgfs"
	May 14 01:12:46 pause-851700 kubelet[7520]: E0514 01:12:46.301112    7520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-851700\" already exists" pod="kube-system/kube-apiserver-pause-851700"
	May 14 01:12:46 pause-851700 kubelet[7520]: E0514 01:12:46.302072    7520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-pause-851700\" already exists" pod="kube-system/kube-controller-manager-pause-851700"
	May 14 01:12:46 pause-851700 kubelet[7520]: E0514 01:12:46.303109    7520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"etcd-pause-851700\" already exists" pod="kube-system/etcd-pause-851700"
	May 14 01:12:46 pause-851700 kubelet[7520]: E0514 01:12:46.303663    7520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-pause-851700\" already exists" pod="kube-system/kube-scheduler-pause-851700"
	May 14 01:12:52 pause-851700 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	May 14 01:12:52 pause-851700 systemd[1]: kubelet.service: Deactivated successfully.
	May 14 01:12:52 pause-851700 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	

-- /stdout --
** stderr ** 
	W0514 01:13:18.196634    7828 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
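The repeated stderr warning above is cosmetic: minikube cannot resolve the Docker CLI context "default" because its metadata file is missing. Docker stores each context's metadata under `~/.docker/contexts/meta/<sha256 of the context name>/meta.json`, which is why "default" maps to the long hex directory in the path shown. A minimal sketch (not part of the test run) reproducing that directory name:

```python
import hashlib

# Docker's CLI derives the per-context metadata directory from the
# SHA-256 hex digest of the context name, e.g.
#   ~/.docker/contexts/meta/<sha256("default")>/meta.json
name = "default"
digest = hashlib.sha256(name.encode("utf-8")).hexdigest()
print(digest)
# → 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
```

The digest matches the directory in the warning, confirming the lookup is for the "default" context; running `docker context use default` typically recreates the missing metadata, but the warning does not affect the test outcome.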
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-851700 -n pause-851700
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-851700 -n pause-851700: exit status 2 (13.136415s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0514 01:13:38.464414    1692 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-851700" apiserver is not running, skipping kubectl commands (state="Paused")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-851700 -n pause-851700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-851700 -n pause-851700: exit status 2 (13.1644344s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0514 01:13:51.577600    5124 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/DeletePaused FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/DeletePaused]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-851700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-851700 logs -n 25: (18.9259526s)
helpers_test.go:252: TestPause/serial/DeletePaused logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|-------------------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-204600 sudo cat                           | kindnet-204600 | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC | 14 May 24 01:12 UTC |
	|         | /etc/nsswitch.conf                                   |                |                   |         |                     |                     |
	| ssh     | -p auto-204600 sudo systemctl                        | auto-204600    | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC | 14 May 24 01:12 UTC |
	|         | cat docker --no-pager                                |                |                   |         |                     |                     |
	| ssh     | -p kindnet-204600 sudo cat                           | kindnet-204600 | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC | 14 May 24 01:12 UTC |
	|         | /etc/hosts                                           |                |                   |         |                     |                     |
	| ssh     | -p auto-204600 sudo cat                              | auto-204600    | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC | 14 May 24 01:12 UTC |
	|         | /etc/docker/daemon.json                              |                |                   |         |                     |                     |
	| ssh     | -p kindnet-204600 sudo cat                           | kindnet-204600 | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC | 14 May 24 01:12 UTC |
	|         | /etc/resolv.conf                                     |                |                   |         |                     |                     |
	| unpause | -p pause-851700                                      | pause-851700   | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC | 14 May 24 01:12 UTC |
	|         | --alsologtostderr -v=5                               |                |                   |         |                     |                     |
	| ssh     | -p auto-204600 sudo docker                           | auto-204600    | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC | 14 May 24 01:12 UTC |
	|         | system info                                          |                |                   |         |                     |                     |
	| pause   | -p pause-851700                                      | pause-851700   | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC | 14 May 24 01:12 UTC |
	|         | --alsologtostderr -v=5                               |                |                   |         |                     |                     |
	| ssh     | -p kindnet-204600 sudo crictl                        | kindnet-204600 | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC | 14 May 24 01:12 UTC |
	|         | pods                                                 |                |                   |         |                     |                     |
	| ssh     | -p auto-204600 sudo systemctl                        | auto-204600    | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC | 14 May 24 01:13 UTC |
	|         | status cri-docker --all --full                       |                |                   |         |                     |                     |
	|         | --no-pager                                           |                |                   |         |                     |                     |
	| delete  | -p pause-851700                                      | pause-851700   | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC |                     |
	|         | --alsologtostderr -v=5                               |                |                   |         |                     |                     |
	| ssh     | -p kindnet-204600 sudo crictl                        | kindnet-204600 | minikube5\jenkins | v1.33.1 | 14 May 24 01:12 UTC | 14 May 24 01:13 UTC |
	|         | ps --all                                             |                |                   |         |                     |                     |
	| ssh     | -p auto-204600 sudo systemctl                        | auto-204600    | minikube5\jenkins | v1.33.1 | 14 May 24 01:13 UTC | 14 May 24 01:13 UTC |
	|         | cat cri-docker --no-pager                            |                |                   |         |                     |                     |
	| ssh     | -p kindnet-204600 sudo find                          | kindnet-204600 | minikube5\jenkins | v1.33.1 | 14 May 24 01:13 UTC | 14 May 24 01:13 UTC |
	|         | /etc/cni -type f -exec sh -c                         |                |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |                   |         |                     |                     |
	| ssh     | -p auto-204600 sudo cat                              | auto-204600    | minikube5\jenkins | v1.33.1 | 14 May 24 01:13 UTC | 14 May 24 01:13 UTC |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |                   |         |                     |                     |
	| ssh     | -p kindnet-204600 sudo ip a s                        | kindnet-204600 | minikube5\jenkins | v1.33.1 | 14 May 24 01:13 UTC | 14 May 24 01:13 UTC |
	| ssh     | -p auto-204600 sudo cat                              | auto-204600    | minikube5\jenkins | v1.33.1 | 14 May 24 01:13 UTC | 14 May 24 01:13 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |                   |         |                     |                     |
	| ssh     | -p kindnet-204600 sudo ip r s                        | kindnet-204600 | minikube5\jenkins | v1.33.1 | 14 May 24 01:13 UTC | 14 May 24 01:13 UTC |
	| ssh     | -p auto-204600 sudo                                  | auto-204600    | minikube5\jenkins | v1.33.1 | 14 May 24 01:13 UTC | 14 May 24 01:13 UTC |
	|         | cri-dockerd --version                                |                |                   |         |                     |                     |
	| ssh     | -p kindnet-204600 sudo                               | kindnet-204600 | minikube5\jenkins | v1.33.1 | 14 May 24 01:13 UTC | 14 May 24 01:13 UTC |
	|         | iptables-save                                        |                |                   |         |                     |                     |
	| ssh     | -p auto-204600 sudo systemctl                        | auto-204600    | minikube5\jenkins | v1.33.1 | 14 May 24 01:13 UTC |                     |
	|         | status containerd --all --full                       |                |                   |         |                     |                     |
	|         | --no-pager                                           |                |                   |         |                     |                     |
	| ssh     | -p kindnet-204600 sudo                               | kindnet-204600 | minikube5\jenkins | v1.33.1 | 14 May 24 01:13 UTC | 14 May 24 01:13 UTC |
	|         | iptables -t nat -L -n -v                             |                |                   |         |                     |                     |
	| ssh     | -p auto-204600 sudo systemctl                        | auto-204600    | minikube5\jenkins | v1.33.1 | 14 May 24 01:13 UTC | 14 May 24 01:14 UTC |
	|         | cat containerd --no-pager                            |                |                   |         |                     |                     |
	| ssh     | -p kindnet-204600 sudo                               | kindnet-204600 | minikube5\jenkins | v1.33.1 | 14 May 24 01:14 UTC |                     |
	|         | systemctl status kubelet --all                       |                |                   |         |                     |                     |
	|         | --full --no-pager                                    |                |                   |         |                     |                     |
	| ssh     | -p auto-204600 sudo cat                              | auto-204600    | minikube5\jenkins | v1.33.1 | 14 May 24 01:14 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                |                   |         |                     |                     |
	|---------|------------------------------------------------------|----------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/14 01:07:49
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0514 01:07:49.618496     744 out.go:291] Setting OutFile to fd 1924 ...
	I0514 01:07:49.618919     744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0514 01:07:49.618919     744 out.go:304] Setting ErrFile to fd 1928...
	I0514 01:07:49.618919     744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0514 01:07:49.639652     744 out.go:298] Setting JSON to false
	I0514 01:07:49.640994     744 start.go:129] hostinfo: {"hostname":"minikube5","uptime":10432,"bootTime":1715638436,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4355 Build 19045.4355","kernelVersion":"10.0.19045.4355 Build 19045.4355","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0514 01:07:49.642544     744 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0514 01:07:49.648261     744 out.go:177] * [calico-204600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	I0514 01:07:49.654614     744 notify.go:220] Checking for updates...
	I0514 01:07:49.657042     744 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0514 01:07:49.658721     744 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0514 01:07:49.661709     744 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0514 01:07:49.664771     744 out.go:177]   - MINIKUBE_LOCATION=18872
	I0514 01:07:49.667354     744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0514 01:07:49.645850    8788 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0514 01:07:49.645928    8788 machine.go:97] duration metric: took 43.8982762s to provisionDockerMachine
	I0514 01:07:49.645985    8788 client.go:171] duration metric: took 1m51.2796852s to LocalClient.Create
	I0514 01:07:49.645985    8788 start.go:167] duration metric: took 1m51.2799693s to libmachine.API.Create "auto-204600"
	I0514 01:07:49.645985    8788 start.go:293] postStartSetup for "auto-204600" (driver="hyperv")
	I0514 01:07:49.646056    8788 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0514 01:07:49.655875    8788 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0514 01:07:49.655875    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-204600 ).state
	I0514 01:07:51.706080    8788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:07:51.706080    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:07:51.706080    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:07:49.671009     744 config.go:182] Loaded profile config "auto-204600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:07:49.671249     744 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:07:49.671835     744 config.go:182] Loaded profile config "kindnet-204600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:07:49.671835     744 config.go:182] Loaded profile config "pause-851700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:07:49.671835     744 driver.go:392] Setting default libvirt URI to qemu:///system
	I0514 01:07:54.571385     744 out.go:177] * Using the hyperv driver based on user configuration
	I0514 01:07:54.574935     744 start.go:297] selected driver: hyperv
	I0514 01:07:54.574935     744 start.go:901] validating driver "hyperv" against <nil>
	I0514 01:07:54.574935     744 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0514 01:07:54.618525     744 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0514 01:07:54.620974     744 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0514 01:07:54.622571     744 cni.go:84] Creating CNI manager for "calico"
	I0514 01:07:54.622571     744 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0514 01:07:54.622669     744 start.go:340] cluster config:
	{Name:calico-204600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:calico-204600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0514 01:07:54.622669     744 iso.go:125] acquiring lock: {Name:mkcecbdb7e30e5a0901160a859f9d5b65d250c44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0514 01:07:54.626867     744 out.go:177] * Starting "calico-204600" primary control-plane node in "calico-204600" cluster
	I0514 01:07:54.034463    8788 main.go:141] libmachine: [stdout =====>] : 172.23.105.126
	
	I0514 01:07:54.045614    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:07:54.046027    8788 sshutil.go:53] new ssh client: &{IP:172.23.105.126 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\auto-204600\id_rsa Username:docker}
	I0514 01:07:54.150047    8788 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4937853s)
	I0514 01:07:54.159066    8788 ssh_runner.go:195] Run: cat /etc/os-release
	I0514 01:07:54.166076    8788 info.go:137] Remote host: Buildroot 2023.02.9
	I0514 01:07:54.166076    8788 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0514 01:07:54.166571    8788 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0514 01:07:54.167546    8788 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> 59842.pem in /etc/ssl/certs
	I0514 01:07:54.175823    8788 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0514 01:07:54.195244    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /etc/ssl/certs/59842.pem (1708 bytes)
	I0514 01:07:54.233081    8788 start.go:296] duration metric: took 4.5867968s for postStartSetup
	I0514 01:07:54.240831    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-204600 ).state
	I0514 01:07:56.134653    8788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:07:56.134653    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:07:56.144599    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:07:54.629186     744 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0514 01:07:54.629351     744 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0514 01:07:54.629351     744 cache.go:56] Caching tarball of preloaded images
	I0514 01:07:54.629583     744 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0514 01:07:54.629827     744 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0514 01:07:54.630096     744 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\calico-204600\config.json ...
	I0514 01:07:54.630399     744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\calico-204600\config.json: {Name:mk9b077adce043a6c2bfbde82ee25c30e0afb8f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:07:54.633507     744 start.go:360] acquireMachinesLock for calico-204600: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0514 01:07:58.371872    8788 main.go:141] libmachine: [stdout =====>] : 172.23.105.126
	
	I0514 01:07:58.371872    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:07:58.382144    8788 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\config.json ...
	I0514 01:07:58.384623    8788 start.go:128] duration metric: took 2m0.0218819s to createHost
	I0514 01:07:58.384723    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-204600 ).state
	I0514 01:08:00.202313    8788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:08:00.202313    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:00.202313    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:08:02.487710    8788 main.go:141] libmachine: [stdout =====>] : 172.23.105.126
	
	I0514 01:08:02.497653    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:02.501704    8788 main.go:141] libmachine: Using SSH client type: native
	I0514 01:08:02.502097    8788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.105.126 22 <nil> <nil>}
	I0514 01:08:02.502097    8788 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0514 01:08:02.638388    8788 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715648882.877798917
	
	I0514 01:08:02.638489    8788 fix.go:216] guest clock: 1715648882.877798917
	I0514 01:08:02.638489    8788 fix.go:229] Guest: 2024-05-14 01:08:02.877798917 +0000 UTC Remote: 2024-05-14 01:07:58.3846721 +0000 UTC m=+391.718110301 (delta=4.493126817s)
	I0514 01:08:02.638573    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-204600 ).state
	I0514 01:08:04.527319    8788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:08:04.527319    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:04.527629    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:08:06.735640    8788 main.go:141] libmachine: [stdout =====>] : 172.23.105.126
	
	I0514 01:08:06.735640    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:06.749065    8788 main.go:141] libmachine: Using SSH client type: native
	I0514 01:08:06.749365    8788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.105.126 22 <nil> <nil>}
	I0514 01:08:06.749365    8788 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715648882
	I0514 01:08:06.910963    7260 start.go:364] duration metric: took 4m31.8000076s to acquireMachinesLock for "kindnet-204600"
	I0514 01:08:06.911583    7260 start.go:93] Provisioning new machine with config: &{Name:kindnet-204600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-204600 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0514 01:08:06.911884    7260 start.go:125] createHost starting for "" (driver="hyperv")
	I0514 01:08:06.915285    7260 out.go:204] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0514 01:08:06.915980    7260 start.go:159] libmachine.API.Create for "kindnet-204600" (driver="hyperv")
	I0514 01:08:06.915980    7260 client.go:168] LocalClient.Create starting
	I0514 01:08:06.916682    7260 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0514 01:08:06.917039    7260 main.go:141] libmachine: Decoding PEM data...
	I0514 01:08:06.917209    7260 main.go:141] libmachine: Parsing certificate...
	I0514 01:08:06.917386    7260 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0514 01:08:06.917730    7260 main.go:141] libmachine: Decoding PEM data...
	I0514 01:08:06.917730    7260 main.go:141] libmachine: Parsing certificate...
	I0514 01:08:06.917922    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0514 01:08:08.628285    7260 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0514 01:08:08.628285    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:08.637447    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0514 01:08:06.905877    8788 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 14 01:08:02 UTC 2024
	
	I0514 01:08:06.905877    8788 fix.go:236] clock set: Tue May 14 01:08:02 UTC 2024
	 (err=<nil>)
	I0514 01:08:06.905877    8788 start.go:83] releasing machines lock for "auto-204600", held for 2m8.5431984s
	I0514 01:08:06.905877    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-204600 ).state
	I0514 01:08:08.865460    8788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:08:08.865460    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:08.865460    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:08:11.168159    8788 main.go:141] libmachine: [stdout =====>] : 172.23.105.126
	
	I0514 01:08:11.178846    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:11.181882    8788 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0514 01:08:11.182042    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-204600 ).state
	I0514 01:08:11.190743    8788 ssh_runner.go:195] Run: cat /version.json
	I0514 01:08:11.190743    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-204600 ).state
	I0514 01:08:10.181233    7260 main.go:141] libmachine: [stdout =====>] : False
	
	I0514 01:08:10.181233    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:10.188665    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0514 01:08:11.578248    7260 main.go:141] libmachine: [stdout =====>] : True
	
	I0514 01:08:11.587420    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:11.587420    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0514 01:08:15.004684    7260 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0514 01:08:15.011759    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:15.013283    7260 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-amd64.iso...
	I0514 01:08:13.205346    8788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:08:13.205346    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:13.205346    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:08:13.217689    8788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:08:13.217689    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:13.217689    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:08:15.589619    8788 main.go:141] libmachine: [stdout =====>] : 172.23.105.126
	
	I0514 01:08:15.589668    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:15.589997    8788 sshutil.go:53] new ssh client: &{IP:172.23.105.126 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\auto-204600\id_rsa Username:docker}
	I0514 01:08:15.620870    8788 main.go:141] libmachine: [stdout =====>] : 172.23.105.126
	
	I0514 01:08:15.620870    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:15.621424    8788 sshutil.go:53] new ssh client: &{IP:172.23.105.126 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\auto-204600\id_rsa Username:docker}
	I0514 01:08:15.701630    8788 ssh_runner.go:235] Completed: cat /version.json: (4.5105912s)
	I0514 01:08:15.710729    8788 ssh_runner.go:195] Run: systemctl --version
	I0514 01:08:15.819151    8788 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6369652s)
	I0514 01:08:15.829153    8788 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0514 01:08:15.837359    8788 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0514 01:08:15.845540    8788 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0514 01:08:15.865712    8788 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0514 01:08:15.865712    8788 start.go:494] detecting cgroup driver to use...
	I0514 01:08:15.865712    8788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 01:08:15.912444    8788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0514 01:08:15.946560    8788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0514 01:08:15.963872    8788 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0514 01:08:15.976047    8788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0514 01:08:16.004158    8788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 01:08:16.031757    8788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0514 01:08:16.061787    8788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 01:08:16.091479    8788 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0514 01:08:16.121479    8788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0514 01:08:16.148271    8788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0514 01:08:16.175581    8788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0514 01:08:16.210520    8788 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0514 01:08:16.236467    8788 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0514 01:08:16.265116    8788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:08:16.469097    8788 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0514 01:08:16.496045    8788 start.go:494] detecting cgroup driver to use...
	I0514 01:08:16.508051    8788 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0514 01:08:16.541208    8788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 01:08:16.571657    8788 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0514 01:08:16.608680    8788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 01:08:16.637300    8788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0514 01:08:16.668698    8788 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0514 01:08:15.347641    7260 main.go:141] libmachine: Creating SSH key...
	I0514 01:08:15.606054    7260 main.go:141] libmachine: Creating VM...
	I0514 01:08:15.606054    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0514 01:08:18.333075    7260 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0514 01:08:18.343960    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:18.344046    7260 main.go:141] libmachine: Using switch "Default Switch"
	I0514 01:08:18.344046    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0514 01:08:19.855049    7260 main.go:141] libmachine: [stdout =====>] : True
	
	I0514 01:08:19.861088    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:19.861088    7260 main.go:141] libmachine: Creating VHD
	I0514 01:08:19.861179    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0514 01:08:16.891651    8788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0514 01:08:16.915168    8788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 01:08:16.964137    8788 ssh_runner.go:195] Run: which cri-dockerd
	I0514 01:08:16.981372    8788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0514 01:08:16.999131    8788 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0514 01:08:17.038646    8788 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0514 01:08:17.217684    8788 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0514 01:08:17.401521    8788 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0514 01:08:17.406479    8788 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0514 01:08:17.445894    8788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:08:17.627472    8788 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0514 01:08:20.204247    8788 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5766063s)
	I0514 01:08:20.218253    8788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0514 01:08:20.249738    8788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 01:08:20.281146    8788 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0514 01:08:20.484482    8788 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0514 01:08:20.657538    8788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:08:20.833439    8788 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0514 01:08:20.870925    8788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 01:08:20.903187    8788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:08:21.067025    8788 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0514 01:08:21.162477    8788 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0514 01:08:21.170751    8788 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0514 01:08:21.178554    8788 start.go:562] Will wait 60s for crictl version
	I0514 01:08:21.187763    8788 ssh_runner.go:195] Run: which crictl
	I0514 01:08:21.202873    8788 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0514 01:08:21.250212    8788 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0514 01:08:21.257222    8788 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 01:08:21.291154    8788 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 01:08:21.329000    8788 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0514 01:08:21.329000    8788 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0514 01:08:21.334622    8788 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0514 01:08:21.334622    8788 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0514 01:08:21.335141    8788 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0514 01:08:21.335141    8788 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:95:ed Flags:up|broadcast|multicast|running}
	I0514 01:08:21.339161    8788 ip.go:210] interface addr: fe80::3ceb:68d:afab:af25/64
	I0514 01:08:21.339161    8788 ip.go:210] interface addr: 172.23.96.1/20
	I0514 01:08:21.353281    8788 ssh_runner.go:195] Run: grep 172.23.96.1	host.minikube.internal$ /etc/hosts
	I0514 01:08:21.361026    8788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0514 01:08:21.386117    8788 kubeadm.go:877] updating cluster {Name:auto-204600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-204600 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.105.126 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0514 01:08:21.386371    8788 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0514 01:08:21.393236    8788 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0514 01:08:21.411868    8788 docker.go:685] Got preloaded images: 
	I0514 01:08:21.411868    8788 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0514 01:08:21.420265    8788 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0514 01:08:21.444324    8788 ssh_runner.go:195] Run: which lz4
	I0514 01:08:21.458487    8788 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0514 01:08:21.461726    8788 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0514 01:08:21.466299    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0514 01:08:23.611767    7260 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 25982223-F9E9-4063-867D-C430D140FBC7
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0514 01:08:23.611866    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:23.612088    7260 main.go:141] libmachine: Writing magic tar header
	I0514 01:08:23.612209    7260 main.go:141] libmachine: Writing SSH key tar header
	I0514 01:08:23.619962    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0514 01:08:23.566622    8788 docker.go:649] duration metric: took 2.11557s to copy over tarball
	I0514 01:08:23.575437    8788 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0514 01:08:26.592355    7260 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:08:26.594234    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:26.594234    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\disk.vhd' -SizeBytes 20000MB
	I0514 01:08:29.134897    7260 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:08:29.134897    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:29.134897    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM kindnet-204600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0514 01:08:34.112411    7260 main.go:141] libmachine: [stdout =====>] : 
	Name           State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----           ----- ----------- ----------------- ------   ------             -------
	kindnet-204600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0514 01:08:34.122356    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:34.122356    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName kindnet-204600 -DynamicMemoryEnabled $false
	I0514 01:08:32.263814    8788 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6878054s)
	I0514 01:08:32.263954    8788 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0514 01:08:32.323225    8788 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0514 01:08:32.342135    8788 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0514 01:08:32.382630    8788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:08:32.554409    8788 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0514 01:08:36.671839    8788 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.1171596s)
	I0514 01:08:36.679580    8788 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0514 01:08:36.701458    8788 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0514 01:08:36.701512    8788 cache_images.go:84] Images are preloaded, skipping loading
	I0514 01:08:36.701512    8788 kubeadm.go:928] updating node { 172.23.105.126 8443 v1.30.0 docker true true} ...
	I0514 01:08:36.701731    8788 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-204600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.105.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:auto-204600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0514 01:08:36.708866    8788 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0514 01:08:36.739696    8788 cni.go:84] Creating CNI manager for ""
	I0514 01:08:36.739790    8788 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0514 01:08:36.739790    8788 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0514 01:08:36.739883    8788 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.105.126 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-204600 NodeName:auto-204600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.105.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.105.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0514 01:08:36.740116    8788 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.105.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "auto-204600"
	  kubeletExtraArgs:
	    node-ip: 172.23.105.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.105.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0514 01:08:36.748607    8788 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0514 01:08:36.766534    8788 binaries.go:44] Found k8s binaries, skipping transfer
	I0514 01:08:36.774632    8788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0514 01:08:36.383264    7260 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:08:36.383339    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:36.383339    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor kindnet-204600 -Count 2
	I0514 01:08:38.359132    7260 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:08:38.359132    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:38.365301    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName kindnet-204600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\boot2docker.iso'
	I0514 01:08:36.798822    8788 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0514 01:08:36.828397    8788 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0514 01:08:36.859035    8788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0514 01:08:36.896074    8788 ssh_runner.go:195] Run: grep 172.23.105.126	control-plane.minikube.internal$ /etc/hosts
	I0514 01:08:36.902460    8788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.105.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
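	The grep check and the bash one-liner above implement an idempotent hosts-file update: filter out any existing `control-plane.minikube.internal` line, append the fresh IP mapping, and copy the temp file back with sudo. A standalone sketch of the same pattern, using a temp file instead of the real /etc/hosts (the IP here is illustrative):

	```shell
	# Idempotent hosts-entry update, mirroring the pattern in the log.
	# HOSTS is a temp stand-in; minikube edits the real /etc/hosts via sudo.
	HOSTS=$(mktemp)
	printf '127.0.0.1\tlocalhost\n1.2.3.4\tcontrol-plane.minikube.internal\n' > "$HOSTS"

	IP="10.0.0.42"
	NAME="control-plane.minikube.internal"

	# Remove any stale line mentioning NAME, then append the current mapping,
	# so repeated runs never accumulate duplicate entries.
	TMP=$(mktemp)
	{ grep -v "$NAME" "$HOSTS"; printf '%s\t%s\n' "$IP" "$NAME"; } > "$TMP"
	cp "$TMP" "$HOSTS"
	cat "$HOSTS"
	```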
	I0514 01:08:36.935223    8788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:08:37.110923    8788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0514 01:08:37.137245    8788 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600 for IP: 172.23.105.126
	I0514 01:08:37.137361    8788 certs.go:194] generating shared ca certs ...
	I0514 01:08:37.137416    8788 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:08:37.137667    8788 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0514 01:08:37.138372    8788 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0514 01:08:37.138486    8788 certs.go:256] generating profile certs ...
	I0514 01:08:37.139052    8788 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\client.key
	I0514 01:08:37.139157    8788 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\client.crt with IP's: []
	I0514 01:08:37.924049    8788 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\client.crt ...
	I0514 01:08:37.924049    8788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\client.crt: {Name:mk9ef5d9715996082b511c57d50d77171fe15bed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:08:37.925469    8788 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\client.key ...
	I0514 01:08:37.925469    8788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\client.key: {Name:mk9a7abc7b9c802b982e8bcc449e03d42ee8f776 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:08:37.926467    8788 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\apiserver.key.656d5658
	I0514 01:08:37.926467    8788 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\apiserver.crt.656d5658 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.105.126]
	I0514 01:08:38.121280    8788 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\apiserver.crt.656d5658 ...
	I0514 01:08:38.121280    8788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\apiserver.crt.656d5658: {Name:mkad59b02e5ab02952d566053a90503e0d1fceb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:08:38.127775    8788 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\apiserver.key.656d5658 ...
	I0514 01:08:38.127775    8788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\apiserver.key.656d5658: {Name:mkc7dde2a9da89392ad4bc1cf9f8482373a0b003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:08:38.128675    8788 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\apiserver.crt.656d5658 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\apiserver.crt
	I0514 01:08:38.139896    8788 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\apiserver.key.656d5658 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\apiserver.key
	I0514 01:08:38.140690    8788 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\proxy-client.key
	I0514 01:08:38.140690    8788 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\proxy-client.crt with IP's: []
	I0514 01:08:38.554131    8788 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\proxy-client.crt ...
	I0514 01:08:38.554131    8788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\proxy-client.crt: {Name:mkd35f616dea7103668518ae7470f3b9a667195f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:08:38.558853    8788 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\proxy-client.key ...
	I0514 01:08:38.558853    8788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\proxy-client.key: {Name:mk68c5c6e7c47ca76eebda32f86e1aedfe9ed236 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:08:38.564486    8788 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem (1338 bytes)
	W0514 01:08:38.570776    8788 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984_empty.pem, impossibly tiny 0 bytes
	I0514 01:08:38.570776    8788 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0514 01:08:38.571095    8788 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0514 01:08:38.571291    8788 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0514 01:08:38.571430    8788 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0514 01:08:38.571621    8788 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem (1708 bytes)
	I0514 01:08:38.571908    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0514 01:08:38.617461    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0514 01:08:38.654280    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0514 01:08:38.698922    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0514 01:08:38.740951    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0514 01:08:38.793866    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0514 01:08:38.836904    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0514 01:08:38.882147    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-204600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0514 01:08:38.928818    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0514 01:08:38.968961    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem --> /usr/share/ca-certificates/5984.pem (1338 bytes)
	I0514 01:08:39.011875    8788 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /usr/share/ca-certificates/59842.pem (1708 bytes)
	I0514 01:08:39.050865    8788 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0514 01:08:39.087667    8788 ssh_runner.go:195] Run: openssl version
	I0514 01:08:39.105178    8788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5984.pem && ln -fs /usr/share/ca-certificates/5984.pem /etc/ssl/certs/5984.pem"
	I0514 01:08:39.132526    8788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5984.pem
	I0514 01:08:39.141172    8788 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0514 01:08:39.149689    8788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5984.pem
	I0514 01:08:39.166333    8788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5984.pem /etc/ssl/certs/51391683.0"
	I0514 01:08:39.190714    8788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59842.pem && ln -fs /usr/share/ca-certificates/59842.pem /etc/ssl/certs/59842.pem"
	I0514 01:08:39.217495    8788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59842.pem
	I0514 01:08:39.223949    8788 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0514 01:08:39.232351    8788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59842.pem
	I0514 01:08:39.249253    8788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59842.pem /etc/ssl/certs/3ec20f2e.0"
	I0514 01:08:39.276473    8788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0514 01:08:39.306414    8788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0514 01:08:39.315192    8788 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0514 01:08:39.328716    8788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0514 01:08:39.352838    8788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
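	Each `openssl x509 -hash` / `ln -fs` pair above installs a CA certificate into OpenSSL's hashed lookup directory (/etc/ssl/certs), where OpenSSL finds CAs by a subject-name hash with a `.0` suffix, e.g. `51391683.0`. A self-contained sketch of the same step, using a throwaway self-signed cert in a temp directory (assumes `openssl` is on PATH; paths are illustrative):

	```shell
	# Install a cert into an OpenSSL-style hashed directory, as the log does
	# for /etc/ssl/certs. CERTDIR stands in for /etc/ssl/certs.
	CERTDIR=$(mktemp -d)

	# Generate a throwaway self-signed cert to act as the CA file.
	openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
	  -keyout "$CERTDIR/demo.key" -out "$CERTDIR/demo.pem" 2>/dev/null

	# OpenSSL locates CAs by a hash of the subject name: <hash>.0
	HASH=$(openssl x509 -hash -noout -in "$CERTDIR/demo.pem")
	ln -fs "$CERTDIR/demo.pem" "$CERTDIR/$HASH.0"
	ls -l "$CERTDIR/$HASH.0"
	```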
	I0514 01:08:39.387566    8788 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0514 01:08:39.397506    8788 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0514 01:08:39.397506    8788 kubeadm.go:391] StartCluster: {Name:auto-204600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-204600 Namespace:default APIServerHAVI
P: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.105.126 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0514 01:08:39.407768    8788 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0514 01:08:39.443378    8788 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0514 01:08:39.471099    8788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0514 01:08:39.497575    8788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0514 01:08:39.513628    8788 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0514 01:08:39.513628    8788 kubeadm.go:156] found existing configuration files:
	
	I0514 01:08:39.522742    8788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0514 01:08:39.538684    8788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0514 01:08:39.551320    8788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0514 01:08:39.575552    8788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0514 01:08:39.592185    8788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0514 01:08:39.602308    8788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0514 01:08:39.626150    8788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0514 01:08:39.627706    8788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0514 01:08:39.651039    8788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0514 01:08:39.678769    8788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0514 01:08:39.694320    8788 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0514 01:08:39.705138    8788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
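	The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise (or, as in this run, when the file does not exist) it is removed so kubeadm regenerates it. The check-then-remove pattern in isolation, against a temp file (endpoint and path are illustrative):

	```shell
	# Keep a config file only if it references the expected endpoint;
	# otherwise remove it so it gets regenerated on the next init.
	ENDPOINT="https://control-plane.minikube.internal:8443"
	CONF=$(mktemp)                      # stands in for /etc/kubernetes/admin.conf
	echo "server: https://old-endpoint:6443" > "$CONF"

	if ! grep -q "$ENDPOINT" "$CONF"; then
	    echo "stale or missing endpoint in $CONF - removing"
	    rm -f "$CONF"
	fi
	```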
	I0514 01:08:39.718508    8788 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0514 01:08:39.927579    8788 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0514 01:08:40.736240    7260 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:08:40.736240    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:40.736240    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName kindnet-204600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\disk.vhd'
	I0514 01:08:43.076921    7260 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:08:43.076921    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:43.076921    7260 main.go:141] libmachine: Starting VM...
	I0514 01:08:43.077120    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM kindnet-204600
	I0514 01:08:45.993214    7260 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:08:45.993214    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:45.993214    7260 main.go:141] libmachine: Waiting for host to start...
	I0514 01:08:45.993272    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:08:48.011294    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:08:48.011294    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:48.016627    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:08:53.170813    8788 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0514 01:08:53.170935    8788 kubeadm.go:309] [preflight] Running pre-flight checks
	I0514 01:08:53.171202    8788 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0514 01:08:53.171627    8788 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0514 01:08:53.172034    8788 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0514 01:08:53.172185    8788 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0514 01:08:53.174566    8788 out.go:204]   - Generating certificates and keys ...
	I0514 01:08:53.174858    8788 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0514 01:08:53.174975    8788 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0514 01:08:53.175198    8788 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0514 01:08:53.175311    8788 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0514 01:08:53.175425    8788 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0514 01:08:53.175683    8788 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0514 01:08:53.175873    8788 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0514 01:08:53.176321    8788 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [auto-204600 localhost] and IPs [172.23.105.126 127.0.0.1 ::1]
	I0514 01:08:53.176513    8788 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0514 01:08:53.177000    8788 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [auto-204600 localhost] and IPs [172.23.105.126 127.0.0.1 ::1]
	I0514 01:08:53.177360    8788 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0514 01:08:53.177583    8788 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0514 01:08:53.177771    8788 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0514 01:08:53.177961    8788 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0514 01:08:53.177961    8788 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0514 01:08:53.177961    8788 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0514 01:08:53.177961    8788 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0514 01:08:53.178507    8788 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0514 01:08:53.178812    8788 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0514 01:08:53.178986    8788 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0514 01:08:53.179102    8788 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0514 01:08:53.182052    8788 out.go:204]   - Booting up control plane ...
	I0514 01:08:53.182758    8788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0514 01:08:53.182758    8788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0514 01:08:53.182758    8788 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0514 01:08:53.183477    8788 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0514 01:08:53.183477    8788 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0514 01:08:53.183477    8788 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0514 01:08:53.183477    8788 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0514 01:08:53.184172    8788 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0514 01:08:53.184331    8788 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001652467s
	I0514 01:08:53.184605    8788 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0514 01:08:53.184811    8788 kubeadm.go:309] [api-check] The API server is healthy after 7.002502319s
	I0514 01:08:53.185078    8788 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0514 01:08:53.185735    8788 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0514 01:08:53.185973    8788 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0514 01:08:53.186622    8788 kubeadm.go:309] [mark-control-plane] Marking the node auto-204600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0514 01:08:53.186742    8788 kubeadm.go:309] [bootstrap-token] Using token: t479qx.5zv0wf6iyoa52qxl
	I0514 01:08:53.189906    8788 out.go:204]   - Configuring RBAC rules ...
	I0514 01:08:53.190083    8788 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0514 01:08:53.190369    8788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0514 01:08:53.190639    8788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0514 01:08:53.190639    8788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0514 01:08:53.191286    8788 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0514 01:08:53.191534    8788 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0514 01:08:53.191771    8788 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0514 01:08:53.191835    8788 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0514 01:08:53.191956    8788 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0514 01:08:53.192015    8788 kubeadm.go:309] 
	I0514 01:08:53.192130    8788 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0514 01:08:53.192130    8788 kubeadm.go:309] 
	I0514 01:08:53.192424    8788 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0514 01:08:53.192476    8788 kubeadm.go:309] 
	I0514 01:08:53.192627    8788 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0514 01:08:53.192746    8788 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0514 01:08:53.192866    8788 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0514 01:08:53.192866    8788 kubeadm.go:309] 
	I0514 01:08:53.193050    8788 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0514 01:08:53.193105    8788 kubeadm.go:309] 
	I0514 01:08:53.193210    8788 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0514 01:08:53.193210    8788 kubeadm.go:309] 
	I0514 01:08:53.193210    8788 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0514 01:08:53.193210    8788 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0514 01:08:53.193815    8788 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0514 01:08:53.193869    8788 kubeadm.go:309] 
	I0514 01:08:53.194186    8788 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0514 01:08:53.194223    8788 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0514 01:08:53.194223    8788 kubeadm.go:309] 
	I0514 01:08:53.194223    8788 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token t479qx.5zv0wf6iyoa52qxl \
	I0514 01:08:53.194223    8788 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 \
	I0514 01:08:53.194826    8788 kubeadm.go:309] 	--control-plane 
	I0514 01:08:53.194826    8788 kubeadm.go:309] 
	I0514 01:08:53.195017    8788 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0514 01:08:53.195078    8788 kubeadm.go:309] 
	I0514 01:08:53.195323    8788 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token t479qx.5zv0wf6iyoa52qxl \
	I0514 01:08:53.195719    8788 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 
	I0514 01:08:53.195810    8788 cni.go:84] Creating CNI manager for ""
	I0514 01:08:53.195810    8788 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0514 01:08:53.199008    8788 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0514 01:08:50.253085    7260 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:08:50.254766    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:51.267891    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:08:53.254397    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:08:53.254397    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:53.254844    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:08:53.212825    8788 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0514 01:08:53.232601    8788 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0514 01:08:53.274849    8788 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0514 01:08:53.284999    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:53.284999    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-204600 minikube.k8s.io/updated_at=2024_05_14T01_08_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761 minikube.k8s.io/name=auto-204600 minikube.k8s.io/primary=true
	I0514 01:08:53.293072    8788 ops.go:34] apiserver oom_adj: -16
	I0514 01:08:53.457800    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:53.968848    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:54.461491    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:54.961365    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:55.472619    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:55.964402    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:56.461053    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:55.506107    7260 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:08:55.506147    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:56.508389    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:08:58.439526    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:08:58.439758    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:08:58.439794    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:08:56.962127    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:57.458836    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:57.971407    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:58.478671    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:58.967812    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:59.458163    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:08:59.958874    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:00.456624    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:00.959976    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:01.470847    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:00.653440    7260 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:09:00.653440    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:01.671620    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:03.624091    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:03.633990    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:03.633990    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:01.960988    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:02.472805    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:02.965929    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:03.459702    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:03.970452    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:04.458987    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:04.969068    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:05.466448    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:05.975441    8788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:09:06.073448    8788 kubeadm.go:1107] duration metric: took 12.7976299s to wait for elevateKubeSystemPrivileges
	W0514 01:09:06.073570    8788 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0514 01:09:06.073631    8788 kubeadm.go:393] duration metric: took 26.6743596s to StartCluster
	I0514 01:09:06.073690    8788 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:09:06.073811    8788 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0514 01:09:06.075729    8788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:09:06.076730    8788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0514 01:09:06.076843    8788 start.go:234] Will wait 15m0s for node &{Name: IP:172.23.105.126 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0514 01:09:06.076843    8788 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0514 01:09:06.076954    8788 addons.go:69] Setting storage-provisioner=true in profile "auto-204600"
	I0514 01:09:06.076954    8788 addons.go:234] Setting addon storage-provisioner=true in "auto-204600"
	I0514 01:09:06.076954    8788 addons.go:69] Setting default-storageclass=true in profile "auto-204600"
	I0514 01:09:06.077061    8788 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-204600"
	I0514 01:09:06.083105    8788 out.go:177] * Verifying Kubernetes components...
	I0514 01:09:06.077061    8788 host.go:66] Checking if "auto-204600" exists ...
	I0514 01:09:06.077061    8788 config.go:182] Loaded profile config "auto-204600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:09:06.077969    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-204600 ).state
	I0514 01:09:06.084146    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-204600 ).state
	I0514 01:09:06.097476    8788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:09:06.303319    8788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.23.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0514 01:09:06.509019    8788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0514 01:09:07.053447    8788 start.go:946] {"host.minikube.internal": 172.23.96.1} host record injected into CoreDNS's ConfigMap
	I0514 01:09:07.063629    8788 node_ready.go:35] waiting up to 15m0s for node "auto-204600" to be "Ready" ...
	I0514 01:09:07.102243    8788 node_ready.go:49] node "auto-204600" has status "Ready":"True"
	I0514 01:09:07.102243    8788 node_ready.go:38] duration metric: took 38.6111ms for node "auto-204600" to be "Ready" ...
	I0514 01:09:07.102243    8788 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 01:09:07.118192    8788 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-9ssxc" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:07.569831    8788 kapi.go:248] "coredns" deployment in "kube-system" namespace and "auto-204600" context rescaled to 1 replicas
	I0514 01:09:08.381131    8788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:08.381131    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:08.383346    8788 addons.go:234] Setting addon default-storageclass=true in "auto-204600"
	I0514 01:09:08.383346    8788 host.go:66] Checking if "auto-204600" exists ...
	I0514 01:09:08.384732    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-204600 ).state
	I0514 01:09:08.399151    8788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:08.400170    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:08.404161    8788 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0514 01:09:05.889990    7260 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:09:05.897604    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:06.912673    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:09.178526    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:09.180223    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:09.180316    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:08.407765    8788 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0514 01:09:08.407765    8788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0514 01:09:08.407765    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-204600 ).state
	I0514 01:09:09.146184    8788 pod_ready.go:102] pod "coredns-7db6d8ff4d-9ssxc" in "kube-system" namespace has status "Ready":"False"
	I0514 01:09:10.632402    8788 pod_ready.go:92] pod "coredns-7db6d8ff4d-9ssxc" in "kube-system" namespace has status "Ready":"True"
	I0514 01:09:10.632402    8788 pod_ready.go:81] duration metric: took 3.5133842s for pod "coredns-7db6d8ff4d-9ssxc" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:10.632402    8788 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-rdwpl" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:10.633213    8788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:10.633213    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:10.635299    8788 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0514 01:09:10.635299    8788 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0514 01:09:10.635376    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-204600 ).state
	I0514 01:09:10.635852    8788 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-rdwpl" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-rdwpl" not found
	I0514 01:09:10.635914    8788 pod_ready.go:81] duration metric: took 3.5117ms for pod "coredns-7db6d8ff4d-rdwpl" in "kube-system" namespace to be "Ready" ...
	E0514 01:09:10.635914    8788 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-rdwpl" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-rdwpl" not found
	I0514 01:09:10.635975    8788 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:10.639943    8788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:10.640005    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:10.640070    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:10.651589    8788 pod_ready.go:92] pod "etcd-auto-204600" in "kube-system" namespace has status "Ready":"True"
	I0514 01:09:10.651690    8788 pod_ready.go:81] duration metric: took 15.6627ms for pod "etcd-auto-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:10.651690    8788 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:10.662569    8788 pod_ready.go:92] pod "kube-apiserver-auto-204600" in "kube-system" namespace has status "Ready":"True"
	I0514 01:09:10.662630    8788 pod_ready.go:81] duration metric: took 10.8859ms for pod "kube-apiserver-auto-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:10.662630    8788 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:10.671444    8788 pod_ready.go:92] pod "kube-controller-manager-auto-204600" in "kube-system" namespace has status "Ready":"True"
	I0514 01:09:10.671444    8788 pod_ready.go:81] duration metric: took 8.8137ms for pod "kube-controller-manager-auto-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:10.671444    8788 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-lmjhb" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:10.836020    8788 pod_ready.go:92] pod "kube-proxy-lmjhb" in "kube-system" namespace has status "Ready":"True"
	I0514 01:09:10.836095    8788 pod_ready.go:81] duration metric: took 164.64ms for pod "kube-proxy-lmjhb" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:10.836095    8788 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:11.241493    8788 pod_ready.go:92] pod "kube-scheduler-auto-204600" in "kube-system" namespace has status "Ready":"True"
	I0514 01:09:11.241493    8788 pod_ready.go:81] duration metric: took 405.3153ms for pod "kube-scheduler-auto-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:09:11.241493    8788 pod_ready.go:38] duration metric: took 4.138976s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 01:09:11.241493    8788 api_server.go:52] waiting for apiserver process to appear ...
	I0514 01:09:11.253374    8788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 01:09:11.278079    8788 api_server.go:72] duration metric: took 5.2007802s to wait for apiserver process to appear ...
	I0514 01:09:11.278144    8788 api_server.go:88] waiting for apiserver healthz status ...
	I0514 01:09:11.278144    8788 api_server.go:253] Checking apiserver healthz at https://172.23.105.126:8443/healthz ...
	I0514 01:09:11.283913    8788 api_server.go:279] https://172.23.105.126:8443/healthz returned 200:
	ok
	I0514 01:09:11.287411    8788 api_server.go:141] control plane version: v1.30.0
	I0514 01:09:11.287493    8788 api_server.go:131] duration metric: took 9.2657ms to wait for apiserver health ...
	I0514 01:09:11.287493    8788 system_pods.go:43] waiting for kube-system pods to appear ...
	I0514 01:09:11.449503    8788 system_pods.go:59] 6 kube-system pods found
	I0514 01:09:11.449503    8788 system_pods.go:61] "coredns-7db6d8ff4d-9ssxc" [a50f7aa7-22b6-4b44-86aa-bba35968ca6b] Running
	I0514 01:09:11.449503    8788 system_pods.go:61] "etcd-auto-204600" [a88faf6b-6b36-4f32-a559-75553032b986] Running
	I0514 01:09:11.449503    8788 system_pods.go:61] "kube-apiserver-auto-204600" [5d597342-10b3-4a26-b00c-b6b20b276ab4] Running
	I0514 01:09:11.449503    8788 system_pods.go:61] "kube-controller-manager-auto-204600" [18d47d96-bd08-4c2e-87d3-1652140ab6cf] Running
	I0514 01:09:11.449503    8788 system_pods.go:61] "kube-proxy-lmjhb" [fbc73802-4f22-4961-a610-2a7d525f1852] Running
	I0514 01:09:11.449503    8788 system_pods.go:61] "kube-scheduler-auto-204600" [9e713973-3bb0-4361-8a4c-4ab8453f6f84] Running
	I0514 01:09:11.449503    8788 system_pods.go:74] duration metric: took 161.9993ms to wait for pod list to return data ...
	I0514 01:09:11.449503    8788 default_sa.go:34] waiting for default service account to be created ...
	I0514 01:09:11.640939    8788 default_sa.go:45] found service account: "default"
	I0514 01:09:11.640939    8788 default_sa.go:55] duration metric: took 191.4229ms for default service account to be created ...
	I0514 01:09:11.640939    8788 system_pods.go:116] waiting for k8s-apps to be running ...
	I0514 01:09:11.760418    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:09:11.769948    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:11.770033    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:13.807200    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:13.817386    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:13.817386    7260 machine.go:94] provisionDockerMachine start ...
	I0514 01:09:13.817477    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:11.851348    8788 system_pods.go:86] 6 kube-system pods found
	I0514 01:09:11.851348    8788 system_pods.go:89] "coredns-7db6d8ff4d-9ssxc" [a50f7aa7-22b6-4b44-86aa-bba35968ca6b] Running
	I0514 01:09:11.851348    8788 system_pods.go:89] "etcd-auto-204600" [a88faf6b-6b36-4f32-a559-75553032b986] Running
	I0514 01:09:11.851348    8788 system_pods.go:89] "kube-apiserver-auto-204600" [5d597342-10b3-4a26-b00c-b6b20b276ab4] Running
	I0514 01:09:11.851348    8788 system_pods.go:89] "kube-controller-manager-auto-204600" [18d47d96-bd08-4c2e-87d3-1652140ab6cf] Running
	I0514 01:09:11.851348    8788 system_pods.go:89] "kube-proxy-lmjhb" [fbc73802-4f22-4961-a610-2a7d525f1852] Running
	I0514 01:09:11.851348    8788 system_pods.go:89] "kube-scheduler-auto-204600" [9e713973-3bb0-4361-8a4c-4ab8453f6f84] Running
	I0514 01:09:11.851348    8788 system_pods.go:126] duration metric: took 210.3956ms to wait for k8s-apps to be running ...
	I0514 01:09:11.851348    8788 system_svc.go:44] waiting for kubelet service to be running ....
	I0514 01:09:11.863490    8788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0514 01:09:11.889623    8788 system_svc.go:56] duration metric: took 38.2725ms WaitForService to wait for kubelet
	I0514 01:09:11.889732    8788 kubeadm.go:576] duration metric: took 5.8123385s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0514 01:09:11.889777    8788 node_conditions.go:102] verifying NodePressure condition ...
	I0514 01:09:12.033534    8788 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 01:09:12.033655    8788 node_conditions.go:123] node cpu capacity is 2
	I0514 01:09:12.033655    8788 node_conditions.go:105] duration metric: took 143.8682ms to run NodePressure ...
	I0514 01:09:12.033655    8788 start.go:240] waiting for startup goroutines ...
	I0514 01:09:12.755576    8788 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:12.755576    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:12.756253    8788 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:13.125157    8788 main.go:141] libmachine: [stdout =====>] : 172.23.105.126
	
	I0514 01:09:13.133001    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:13.133491    8788 sshutil.go:53] new ssh client: &{IP:172.23.105.126 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\auto-204600\id_rsa Username:docker}
	I0514 01:09:13.278014    8788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0514 01:09:15.116521    8788 main.go:141] libmachine: [stdout =====>] : 172.23.105.126
	
	I0514 01:09:15.116627    8788 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:15.116944    8788 sshutil.go:53] new ssh client: &{IP:172.23.105.126 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\auto-204600\id_rsa Username:docker}
	I0514 01:09:15.252207    8788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0514 01:09:15.455617    8788 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0514 01:09:15.457793    8788 addons.go:505] duration metric: took 9.3803276s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0514 01:09:15.457793    8788 start.go:245] waiting for cluster config update ...
	I0514 01:09:15.457793    8788 start.go:254] writing updated cluster config ...
	I0514 01:09:15.466957    8788 ssh_runner.go:195] Run: rm -f paused
	I0514 01:09:15.585460    8788 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0514 01:09:15.588870    8788 out.go:177] * Done! kubectl is now configured to use "auto-204600" cluster and "default" namespace by default
	I0514 01:09:15.799784    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:15.799784    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:15.799985    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:18.127941    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:09:18.133803    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:18.139592    7260 main.go:141] libmachine: Using SSH client type: native
	I0514 01:09:18.149364    7260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.99.4 22 <nil> <nil>}
	I0514 01:09:18.149364    7260 main.go:141] libmachine: About to run SSH command:
	hostname
	I0514 01:09:18.290368    7260 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0514 01:09:18.290368    7260 buildroot.go:166] provisioning hostname "kindnet-204600"
	I0514 01:09:18.290538    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:20.219261    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:20.229414    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:20.229537    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:22.521225    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:09:22.521225    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:22.535851    7260 main.go:141] libmachine: Using SSH client type: native
	I0514 01:09:22.536355    7260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.99.4 22 <nil> <nil>}
	I0514 01:09:22.536355    7260 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-204600 && echo "kindnet-204600" | sudo tee /etc/hostname
	I0514 01:09:22.674787    7260 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-204600
	
	I0514 01:09:22.674898    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:24.631892    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:24.631892    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:24.631892    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:26.938533    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:09:26.938533    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:26.947558    7260 main.go:141] libmachine: Using SSH client type: native
	I0514 01:09:26.947558    7260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.99.4 22 <nil> <nil>}
	I0514 01:09:26.947558    7260 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-204600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-204600/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-204600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0514 01:09:27.098960    7260 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0514 01:09:27.099056    7260 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0514 01:09:27.099176    7260 buildroot.go:174] setting up certificates
	I0514 01:09:27.099176    7260 provision.go:84] configureAuth start
	I0514 01:09:27.099228    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:29.050369    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:29.050369    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:29.050369    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:31.325342    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:09:31.335505    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:31.335803    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:33.221308    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:33.221308    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:33.221442    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:35.459227    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:09:35.459227    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:35.459227    7260 provision.go:143] copyHostCerts
	I0514 01:09:35.469704    7260 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0514 01:09:35.469800    7260 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0514 01:09:35.470194    7260 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0514 01:09:35.471658    7260 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0514 01:09:35.471658    7260 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0514 01:09:35.472067    7260 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0514 01:09:35.473344    7260 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0514 01:09:35.473344    7260 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0514 01:09:35.473584    7260 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0514 01:09:35.474598    7260 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kindnet-204600 san=[127.0.0.1 172.23.99.4 kindnet-204600 localhost minikube]
	I0514 01:09:35.707150    7260 provision.go:177] copyRemoteCerts
	I0514 01:09:35.717391    7260 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0514 01:09:35.717391    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:37.603663    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:37.614915    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:37.614915    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:39.818524    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:09:39.818524    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:39.828708    7260 sshutil.go:53] new ssh client: &{IP:172.23.99.4 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\id_rsa Username:docker}
	I0514 01:09:39.912975    7260 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1953048s)
	I0514 01:09:39.928400    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0514 01:09:39.976611    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I0514 01:09:40.018458    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0514 01:09:40.057246    7260 provision.go:87] duration metric: took 12.9572066s to configureAuth
	I0514 01:09:40.061184    7260 buildroot.go:189] setting minikube options for container-runtime
	I0514 01:09:40.061796    7260 config.go:182] Loaded profile config "kindnet-204600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:09:40.061860    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:42.002199    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:42.011895    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:42.011895    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:44.239008    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:09:44.239008    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:44.243274    7260 main.go:141] libmachine: Using SSH client type: native
	I0514 01:09:44.243643    7260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.99.4 22 <nil> <nil>}
	I0514 01:09:44.243717    7260 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0514 01:09:44.369401    7260 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0514 01:09:44.369401    7260 buildroot.go:70] root file system type: tmpfs
	I0514 01:09:44.369661    7260 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0514 01:09:44.369746    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:46.247716    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:46.257616    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:46.257708    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:48.611098    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:09:48.611098    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:48.620882    7260 main.go:141] libmachine: Using SSH client type: native
	I0514 01:09:48.621685    7260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.99.4 22 <nil> <nil>}
	I0514 01:09:48.621884    7260 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0514 01:09:48.768796    7260 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0514 01:09:48.768877    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:50.742762    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:50.742762    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:50.752930    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:53.075215    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:09:53.075215    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:53.089183    7260 main.go:141] libmachine: Using SSH client type: native
	I0514 01:09:53.089596    7260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.99.4 22 <nil> <nil>}
	I0514 01:09:53.089596    7260 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0514 01:09:55.150816    7260 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0514 01:09:55.150918    7260 machine.go:97] duration metric: took 41.3307245s to provisionDockerMachine
	I0514 01:09:55.150918    7260 client.go:171] duration metric: took 1m48.2277718s to LocalClient.Create
	I0514 01:09:55.150971    7260 start.go:167] duration metric: took 1m48.2278251s to libmachine.API.Create "kindnet-204600"
	I0514 01:09:55.151023    7260 start.go:293] postStartSetup for "kindnet-204600" (driver="hyperv")
	I0514 01:09:55.151023    7260 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0514 01:09:55.161359    7260 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0514 01:09:55.161359    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:09:57.133615    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:09:57.133615    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:57.133615    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:09:59.509011    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:09:59.509011    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:09:59.509343    7260 sshutil.go:53] new ssh client: &{IP:172.23.99.4 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\id_rsa Username:docker}
	I0514 01:09:59.607514    7260 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4458576s)
	I0514 01:09:59.616873    7260 ssh_runner.go:195] Run: cat /etc/os-release
	I0514 01:09:59.623651    7260 info.go:137] Remote host: Buildroot 2023.02.9
	I0514 01:09:59.623651    7260 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0514 01:09:59.624111    7260 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0514 01:09:59.624694    7260 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> 59842.pem in /etc/ssl/certs
	I0514 01:09:59.633563    7260 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0514 01:09:59.654460    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /etc/ssl/certs/59842.pem (1708 bytes)
	I0514 01:09:59.702904    7260 start.go:296] duration metric: took 4.5515772s for postStartSetup
	I0514 01:09:59.704855    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:10:01.694691    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:01.705180    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:01.705180    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:04.044546    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:10:04.044546    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:04.054540    7260 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\config.json ...
	I0514 01:10:04.056717    7260 start.go:128] duration metric: took 1m57.1370716s to createHost
	I0514 01:10:04.056717    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:10:06.001757    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:06.001757    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:06.011826    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:08.323878    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:10:08.323878    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:08.339059    7260 main.go:141] libmachine: Using SSH client type: native
	I0514 01:10:08.339652    7260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.99.4 22 <nil> <nil>}
	I0514 01:10:08.339652    7260 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0514 01:10:08.469641    7260 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715649008.704828778
	
	I0514 01:10:08.469641    7260 fix.go:216] guest clock: 1715649008.704828778
	I0514 01:10:08.469641    7260 fix.go:229] Guest: 2024-05-14 01:10:08.704828778 +0000 UTC Remote: 2024-05-14 01:10:04.0567177 +0000 UTC m=+394.123370801 (delta=4.648111078s)
	I0514 01:10:08.469641    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:10:12.821332   14332 start.go:364] duration metric: took 4m34.6390057s to acquireMachinesLock for "pause-851700"
	I0514 01:10:12.822103   14332 start.go:96] Skipping create...Using existing machine configuration
	I0514 01:10:12.822221   14332 fix.go:54] fixHost starting: 
	I0514 01:10:12.823140   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:10:14.911268   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:14.911268   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:14.911268   14332 fix.go:112] recreateIfNeeded on pause-851700: state=Running err=<nil>
	W0514 01:10:14.911268   14332 fix.go:138] unexpected machine state, will restart: <nil>
	I0514 01:10:14.915701   14332 out.go:177] * Updating the running hyperv "pause-851700" VM ...
	I0514 01:10:10.380641    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:10.380641    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:10.396165    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:12.669176    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:10:12.679240    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:12.682860    7260 main.go:141] libmachine: Using SSH client type: native
	I0514 01:10:12.683218    7260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.99.4 22 <nil> <nil>}
	I0514 01:10:12.683306    7260 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715649008
	I0514 01:10:12.821332    7260 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 14 01:10:08 UTC 2024
	
	I0514 01:10:12.821332    7260 fix.go:236] clock set: Tue May 14 01:10:08 UTC 2024
	 (err=<nil>)
	I0514 01:10:12.821332    7260 start.go:83] releasing machines lock for "kindnet-204600", held for 2m5.901902s
	I0514 01:10:12.821332    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:10:14.909118    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:14.909118    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:14.909214    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:14.918205   14332 machine.go:94] provisionDockerMachine start ...
	I0514 01:10:14.918349   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:10:16.977333   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:16.977333   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:16.977333   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:17.364670    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:10:17.364744    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:17.368492    7260 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0514 01:10:17.368492    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:10:17.381252    7260 ssh_runner.go:195] Run: cat /version.json
	I0514 01:10:17.381252    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:10:19.520155    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:19.526750    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:19.526750    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:19.542132    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:19.542132    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:19.551880    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:19.583159   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:10:19.583341   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:19.587157   14332 main.go:141] libmachine: Using SSH client type: native
	I0514 01:10:19.587708   14332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.154 22 <nil> <nil>}
	I0514 01:10:19.587824   14332 main.go:141] libmachine: About to run SSH command:
	hostname
	I0514 01:10:19.728093   14332 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-851700
	
	I0514 01:10:19.728093   14332 buildroot.go:166] provisioning hostname "pause-851700"
	I0514 01:10:19.728093   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:10:21.861174   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:21.861174   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:21.869962   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:22.037603    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:10:22.047219    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:22.047662    7260 sshutil.go:53] new ssh client: &{IP:172.23.99.4 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\id_rsa Username:docker}
	I0514 01:10:22.068105    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:10:22.068105    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:22.068105    7260 sshutil.go:53] new ssh client: &{IP:172.23.99.4 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\id_rsa Username:docker}
	I0514 01:10:22.190882    7260 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8220664s)
	I0514 01:10:22.190882    7260 ssh_runner.go:235] Completed: cat /version.json: (4.809308s)
	I0514 01:10:22.199146    7260 ssh_runner.go:195] Run: systemctl --version
	I0514 01:10:22.217590    7260 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0514 01:10:22.232886    7260 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0514 01:10:22.241905    7260 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0514 01:10:22.275867    7260 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0514 01:10:22.275867    7260 start.go:494] detecting cgroup driver to use...
	I0514 01:10:22.275867    7260 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 01:10:22.322308    7260 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0514 01:10:22.353728    7260 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0514 01:10:22.378183    7260 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0514 01:10:22.392289    7260 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0514 01:10:22.430762    7260 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 01:10:22.469044    7260 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0514 01:10:22.503620    7260 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 01:10:22.532652    7260 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0514 01:10:22.561329    7260 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0514 01:10:22.588010    7260 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0514 01:10:22.615533    7260 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0514 01:10:22.644526    7260 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0514 01:10:22.673652    7260 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0514 01:10:22.704166    7260 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:10:22.906252    7260 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0514 01:10:22.938711    7260 start.go:494] detecting cgroup driver to use...
	I0514 01:10:22.948604    7260 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0514 01:10:22.980103    7260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 01:10:23.020852    7260 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0514 01:10:23.069011    7260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 01:10:23.105894    7260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0514 01:10:23.145278    7260 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0514 01:10:23.219649    7260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0514 01:10:23.247450    7260 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 01:10:23.287271    7260 ssh_runner.go:195] Run: which cri-dockerd
	I0514 01:10:23.302125    7260 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0514 01:10:23.319780    7260 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0514 01:10:23.368010    7260 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0514 01:10:23.589014    7260 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0514 01:10:23.777696    7260 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0514 01:10:23.777696    7260 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0514 01:10:23.828626    7260 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:10:24.009326    7260 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0514 01:10:26.542848    7260 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5333528s)
	I0514 01:10:26.553345    7260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0514 01:10:26.589522    7260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 01:10:26.626809    7260 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0514 01:10:26.819914    7260 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0514 01:10:27.004894    7260 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:10:27.196133    7260 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0514 01:10:27.233366    7260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 01:10:27.269138    7260 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:10:27.497692    7260 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0514 01:10:27.607535    7260 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0514 01:10:27.617698    7260 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0514 01:10:27.628953    7260 start.go:562] Will wait 60s for crictl version
	I0514 01:10:27.641225    7260 ssh_runner.go:195] Run: which crictl
	I0514 01:10:27.658999    7260 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0514 01:10:27.714417    7260 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0514 01:10:27.724853    7260 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 01:10:27.768877    7260 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 01:10:24.227559   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:10:24.227559   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:24.242408   14332 main.go:141] libmachine: Using SSH client type: native
	I0514 01:10:24.242753   14332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.154 22 <nil> <nil>}
	I0514 01:10:24.242825   14332 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-851700 && echo "pause-851700" | sudo tee /etc/hostname
	I0514 01:10:24.403055   14332 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-851700
	
	I0514 01:10:24.403055   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:10:26.366671   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:26.366671   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:26.367174   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:27.801216    7260 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0514 01:10:27.801273    7260 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0514 01:10:27.805814    7260 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0514 01:10:27.805814    7260 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0514 01:10:27.805814    7260 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0514 01:10:27.805814    7260 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:95:ed Flags:up|broadcast|multicast|running}
	I0514 01:10:27.808854    7260 ip.go:210] interface addr: fe80::3ceb:68d:afab:af25/64
	I0514 01:10:27.808854    7260 ip.go:210] interface addr: 172.23.96.1/20
	I0514 01:10:27.811796    7260 ssh_runner.go:195] Run: grep 172.23.96.1	host.minikube.internal$ /etc/hosts
	I0514 01:10:27.823507    7260 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0514 01:10:27.844120    7260 kubeadm.go:877] updating cluster {Name:kindnet-204600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-204600 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:172.23.99.4 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0514 01:10:27.844120    7260 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0514 01:10:27.852427    7260 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0514 01:10:27.872033    7260 docker.go:685] Got preloaded images: 
	I0514 01:10:27.872033    7260 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0514 01:10:27.880580    7260 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0514 01:10:27.906518    7260 ssh_runner.go:195] Run: which lz4
	I0514 01:10:27.921112    7260 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0514 01:10:27.928436    7260 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0514 01:10:27.928608    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0514 01:10:29.848665    7260 docker.go:649] duration metric: took 1.9355285s to copy over tarball
	I0514 01:10:29.858133    7260 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0514 01:10:28.992515   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:10:28.992515   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:28.996995   14332 main.go:141] libmachine: Using SSH client type: native
	I0514 01:10:28.997533   14332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.154 22 <nil> <nil>}
	I0514 01:10:28.997651   14332 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-851700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-851700/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-851700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0514 01:10:29.170768   14332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0514 01:10:29.170842   14332 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0514 01:10:29.170985   14332 buildroot.go:174] setting up certificates
	I0514 01:10:29.170985   14332 provision.go:84] configureAuth start
	I0514 01:10:29.171126   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:10:31.414956   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:31.414956   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:31.415050   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:33.905433   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:10:33.905433   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:33.905757   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:10:35.988267   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:35.988511   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:35.988569   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:36.498206    7260 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (6.6396284s)
	I0514 01:10:36.498206    7260 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0514 01:10:36.564075    7260 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0514 01:10:38.212074    7260 ssh_runner.go:235] Completed: sudo cat /var/lib/docker/image/overlay2/repositories.json: (1.6478894s)
	I0514 01:10:38.212311    7260 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0514 01:10:38.255060    7260 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:10:38.480019    7260 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0514 01:10:38.427972   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:10:38.428088   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:38.428088   14332 provision.go:143] copyHostCerts
	I0514 01:10:38.428541   14332 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0514 01:10:38.428541   14332 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0514 01:10:38.428991   14332 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0514 01:10:38.430327   14332 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0514 01:10:38.430327   14332 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0514 01:10:38.430755   14332 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0514 01:10:38.432105   14332 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0514 01:10:38.432193   14332 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0514 01:10:38.432562   14332 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0514 01:10:38.432879   14332 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.pause-851700 san=[127.0.0.1 172.23.111.154 localhost minikube pause-851700]
	I0514 01:10:38.755420   14332 provision.go:177] copyRemoteCerts
	I0514 01:10:38.762813   14332 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0514 01:10:38.762813   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:10:40.753825   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:40.754145   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:40.754424   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:43.135802   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:10:43.135802   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:43.136208   14332 sshutil.go:53] new ssh client: &{IP:172.23.111.154 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\pause-851700\id_rsa Username:docker}
	I0514 01:10:42.840745    7260 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.3604331s)
	I0514 01:10:42.848377    7260 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0514 01:10:42.876276    7260 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0514 01:10:42.876323    7260 cache_images.go:84] Images are preloaded, skipping loading
	I0514 01:10:42.876323    7260 kubeadm.go:928] updating node { 172.23.99.4 8443 v1.30.0 docker true true} ...
	I0514 01:10:42.876488    7260 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-204600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.99.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:kindnet-204600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0514 01:10:42.883326    7260 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0514 01:10:42.921695    7260 cni.go:84] Creating CNI manager for "kindnet"
	I0514 01:10:42.921695    7260 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0514 01:10:42.921695    7260 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.99.4 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-204600 NodeName:kindnet-204600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.99.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.99.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0514 01:10:42.921695    7260 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.99.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kindnet-204600"
	  kubeletExtraArgs:
	    node-ip: 172.23.99.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.99.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0514 01:10:42.932008    7260 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0514 01:10:42.951537    7260 binaries.go:44] Found k8s binaries, skipping transfer
	I0514 01:10:42.960693    7260 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0514 01:10:42.977487    7260 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0514 01:10:43.010336    7260 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0514 01:10:43.041440    7260 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0514 01:10:43.081580    7260 ssh_runner.go:195] Run: grep 172.23.99.4	control-plane.minikube.internal$ /etc/hosts
	I0514 01:10:43.088064    7260 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.99.4	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0514 01:10:43.117920    7260 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:10:43.316727    7260 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0514 01:10:43.347198    7260 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600 for IP: 172.23.99.4
	I0514 01:10:43.347247    7260 certs.go:194] generating shared ca certs ...
	I0514 01:10:43.347299    7260 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:10:43.348074    7260 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0514 01:10:43.348493    7260 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0514 01:10:43.348697    7260 certs.go:256] generating profile certs ...
	I0514 01:10:43.349478    7260 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\client.key
	I0514 01:10:43.349612    7260 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\client.crt with IP's: []
	I0514 01:10:43.550567    7260 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\client.crt ...
	I0514 01:10:43.550567    7260 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\client.crt: {Name:mk447ec615ac7cfb9098663709e09f31e7a4c310 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:10:43.550567    7260 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\client.key ...
	I0514 01:10:43.550567    7260 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\client.key: {Name:mk3b43b340e5f30800ec094c7f26f77520de35c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:10:43.552099    7260 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\apiserver.key.239f16d8
	I0514 01:10:43.552099    7260 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\apiserver.crt.239f16d8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.99.4]
	I0514 01:10:43.706218    7260 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\apiserver.crt.239f16d8 ...
	I0514 01:10:43.706320    7260 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\apiserver.crt.239f16d8: {Name:mk911640488392ebda6774ce8198951c32666df0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:10:43.707397    7260 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\apiserver.key.239f16d8 ...
	I0514 01:10:43.707397    7260 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\apiserver.key.239f16d8: {Name:mkb88ea0628e1097285c601ff90a8f1a7bc94dff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:10:43.708081    7260 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\apiserver.crt.239f16d8 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\apiserver.crt
	I0514 01:10:43.718957    7260 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\apiserver.key.239f16d8 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\apiserver.key
	I0514 01:10:43.720054    7260 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\proxy-client.key
	I0514 01:10:43.720054    7260 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\proxy-client.crt with IP's: []
	I0514 01:10:43.869082    7260 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\proxy-client.crt ...
	I0514 01:10:43.869082    7260 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\proxy-client.crt: {Name:mk83cdee94f5cafe180c7b2a365086694dc5d50d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:10:43.870240    7260 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\proxy-client.key ...
	I0514 01:10:43.870240    7260 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\proxy-client.key: {Name:mk1ba62c1687114848064427cc837edbfc7f4d69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:10:43.884191    7260 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem (1338 bytes)
	W0514 01:10:43.884191    7260 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984_empty.pem, impossibly tiny 0 bytes
	I0514 01:10:43.884191    7260 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0514 01:10:43.884191    7260 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0514 01:10:43.884191    7260 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0514 01:10:43.885195    7260 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0514 01:10:43.885195    7260 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem (1708 bytes)
	I0514 01:10:43.886198    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0514 01:10:43.932933    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0514 01:10:43.983918    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0514 01:10:44.030726    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0514 01:10:44.075750    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0514 01:10:44.122543    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0514 01:10:44.184814    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0514 01:10:44.250210    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-204600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0514 01:10:44.300040    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /usr/share/ca-certificates/59842.pem (1708 bytes)
	I0514 01:10:44.348824    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0514 01:10:44.405653    7260 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem --> /usr/share/ca-certificates/5984.pem (1338 bytes)
	I0514 01:10:44.461631    7260 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0514 01:10:44.510066    7260 ssh_runner.go:195] Run: openssl version
	I0514 01:10:44.530562    7260 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59842.pem && ln -fs /usr/share/ca-certificates/59842.pem /etc/ssl/certs/59842.pem"
	I0514 01:10:44.563076    7260 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59842.pem
	I0514 01:10:44.570027    7260 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0514 01:10:44.581440    7260 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59842.pem
	I0514 01:10:44.599449    7260 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59842.pem /etc/ssl/certs/3ec20f2e.0"
	I0514 01:10:44.628287    7260 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0514 01:10:44.662425    7260 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0514 01:10:44.669390    7260 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0514 01:10:44.681682    7260 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0514 01:10:44.702734    7260 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0514 01:10:44.734298    7260 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5984.pem && ln -fs /usr/share/ca-certificates/5984.pem /etc/ssl/certs/5984.pem"
	I0514 01:10:44.763750    7260 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5984.pem
	I0514 01:10:44.770999    7260 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0514 01:10:44.779980    7260 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5984.pem
	I0514 01:10:44.799907    7260 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5984.pem /etc/ssl/certs/51391683.0"
	I0514 01:10:44.828691    7260 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0514 01:10:44.836384    7260 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0514 01:10:44.836799    7260 kubeadm.go:391] StartCluster: {Name:kindnet-204600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-204600 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:172.23.99.4 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0514 01:10:44.843481    7260 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0514 01:10:44.878035    7260 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0514 01:10:44.910366    7260 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0514 01:10:44.939428    7260 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0514 01:10:44.958307    7260 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0514 01:10:44.958307    7260 kubeadm.go:156] found existing configuration files:
	
	I0514 01:10:44.968390    7260 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0514 01:10:44.987567    7260 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0514 01:10:44.996199    7260 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0514 01:10:45.023764    7260 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0514 01:10:45.040789    7260 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0514 01:10:43.248544   14332 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4853659s)
	I0514 01:10:43.248783   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0514 01:10:43.300337   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0514 01:10:43.365256   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0514 01:10:43.426991   14332 provision.go:87] duration metric: took 14.2550506s to configureAuth
	I0514 01:10:43.426991   14332 buildroot.go:189] setting minikube options for container-runtime
	I0514 01:10:43.427983   14332 config.go:182] Loaded profile config "pause-851700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:10:43.427983   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:10:45.588673   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:45.588721   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:45.588775   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:48.032161   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:10:48.032161   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:48.039894   14332 main.go:141] libmachine: Using SSH client type: native
	I0514 01:10:48.040515   14332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.154 22 <nil> <nil>}
	I0514 01:10:48.040515   14332 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0514 01:10:45.059219    7260 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0514 01:10:45.091232    7260 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0514 01:10:45.108284    7260 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0514 01:10:45.117900    7260 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0514 01:10:45.145011    7260 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0514 01:10:45.162455    7260 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0514 01:10:45.171554    7260 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0514 01:10:45.189632    7260 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0514 01:10:45.445222    7260 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0514 01:10:48.178721   14332 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0514 01:10:48.178780   14332 buildroot.go:70] root file system type: tmpfs
	I0514 01:10:48.179073   14332 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0514 01:10:48.179185   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:10:50.204433   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:50.204433   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:50.205419   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:52.691891   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:10:52.691891   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:52.697571   14332 main.go:141] libmachine: Using SSH client type: native
	I0514 01:10:52.698068   14332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.154 22 <nil> <nil>}
	I0514 01:10:52.698213   14332 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0514 01:10:52.875889   14332 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0514 01:10:52.875889   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:10:54.999366   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:54.999366   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:54.999366   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:10:57.407373   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:10:57.407838   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:57.413611   14332 main.go:141] libmachine: Using SSH client type: native
	I0514 01:10:57.414285   14332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.154 22 <nil> <nil>}
	I0514 01:10:57.414285   14332 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0514 01:10:57.576560   14332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0514 01:10:57.576560   14332 machine.go:97] duration metric: took 42.6554008s to provisionDockerMachine
	I0514 01:10:57.576560   14332 start.go:293] postStartSetup for "pause-851700" (driver="hyperv")
	I0514 01:10:57.576560   14332 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0514 01:10:57.585609   14332 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0514 01:10:57.585609   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:10:58.606534    7260 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0514 01:10:58.606656    7260 kubeadm.go:309] [preflight] Running pre-flight checks
	I0514 01:10:58.606881    7260 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0514 01:10:58.607159    7260 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0514 01:10:58.607497    7260 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0514 01:10:58.607648    7260 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0514 01:10:58.610302    7260 out.go:204]   - Generating certificates and keys ...
	I0514 01:10:58.610447    7260 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0514 01:10:58.610560    7260 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0514 01:10:58.610787    7260 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0514 01:10:58.610920    7260 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0514 01:10:58.610989    7260 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0514 01:10:58.611097    7260 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0514 01:10:58.611223    7260 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0514 01:10:58.611407    7260 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kindnet-204600 localhost] and IPs [172.23.99.4 127.0.0.1 ::1]
	I0514 01:10:58.611593    7260 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0514 01:10:58.612042    7260 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kindnet-204600 localhost] and IPs [172.23.99.4 127.0.0.1 ::1]
	I0514 01:10:58.612042    7260 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0514 01:10:58.612042    7260 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0514 01:10:58.612042    7260 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0514 01:10:58.612732    7260 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0514 01:10:58.612860    7260 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0514 01:10:58.612974    7260 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0514 01:10:58.613100    7260 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0514 01:10:58.613312    7260 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0514 01:10:58.613495    7260 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0514 01:10:58.613495    7260 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0514 01:10:58.613495    7260 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0514 01:10:58.616693    7260 out.go:204]   - Booting up control plane ...
	I0514 01:10:58.616693    7260 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0514 01:10:58.617548    7260 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0514 01:10:58.617805    7260 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0514 01:10:58.618208    7260 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0514 01:10:58.618353    7260 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0514 01:10:58.618353    7260 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0514 01:10:58.618353    7260 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0514 01:10:58.618967    7260 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0514 01:10:58.619054    7260 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001590601s
	I0514 01:10:58.619159    7260 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0514 01:10:58.619159    7260 kubeadm.go:309] [api-check] The API server is healthy after 7.002478877s
	I0514 01:10:58.619159    7260 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0514 01:10:58.619714    7260 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0514 01:10:58.619997    7260 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0514 01:10:58.619997    7260 kubeadm.go:309] [mark-control-plane] Marking the node kindnet-204600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0514 01:10:58.620682    7260 kubeadm.go:309] [bootstrap-token] Using token: bh2oij.hcns315mms4vj5zn
	I0514 01:10:58.624810    7260 out.go:204]   - Configuring RBAC rules ...
	I0514 01:10:58.625001    7260 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0514 01:10:58.625200    7260 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0514 01:10:58.625200    7260 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0514 01:10:58.625200    7260 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0514 01:10:58.626012    7260 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0514 01:10:58.626109    7260 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0514 01:10:58.626600    7260 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0514 01:10:58.626729    7260 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0514 01:10:58.626790    7260 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0514 01:10:58.626868    7260 kubeadm.go:309] 
	I0514 01:10:58.626930    7260 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0514 01:10:58.626930    7260 kubeadm.go:309] 
	I0514 01:10:58.626995    7260 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0514 01:10:58.626995    7260 kubeadm.go:309] 
	I0514 01:10:58.626995    7260 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0514 01:10:58.627220    7260 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0514 01:10:58.627296    7260 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0514 01:10:58.627296    7260 kubeadm.go:309] 
	I0514 01:10:58.627389    7260 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0514 01:10:58.627389    7260 kubeadm.go:309] 
	I0514 01:10:58.627452    7260 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0514 01:10:58.627452    7260 kubeadm.go:309] 
	I0514 01:10:58.627508    7260 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0514 01:10:58.627640    7260 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0514 01:10:58.627701    7260 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0514 01:10:58.627701    7260 kubeadm.go:309] 
	I0514 01:10:58.627823    7260 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0514 01:10:58.627950    7260 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0514 01:10:58.627950    7260 kubeadm.go:309] 
	I0514 01:10:58.628113    7260 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token bh2oij.hcns315mms4vj5zn \
	I0514 01:10:58.628310    7260 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 \
	I0514 01:10:58.628370    7260 kubeadm.go:309] 	--control-plane 
	I0514 01:10:58.628455    7260 kubeadm.go:309] 
	I0514 01:10:58.628601    7260 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0514 01:10:58.628601    7260 kubeadm.go:309] 
	I0514 01:10:58.628733    7260 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token bh2oij.hcns315mms4vj5zn \
	I0514 01:10:58.628929    7260 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:51db40348d5ebebb4bad7ce69954405a1c01690d495025e3f099a6a8e8620f86 
	I0514 01:10:58.628994    7260 cni.go:84] Creating CNI manager for "kindnet"
	I0514 01:10:58.631014    7260 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0514 01:10:58.642719    7260 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0514 01:10:58.652672    7260 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0514 01:10:58.652672    7260 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0514 01:10:58.704614    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0514 01:10:59.070161    7260 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0514 01:10:59.080484    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:10:59.083116    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-204600 minikube.k8s.io/updated_at=2024_05_14T01_10_59_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761 minikube.k8s.io/name=kindnet-204600 minikube.k8s.io/primary=true
	I0514 01:10:59.090082    7260 ops.go:34] apiserver oom_adj: -16
	I0514 01:10:59.234929    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:10:59.747145    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:10:59.677012   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:10:59.677451   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:10:59.677451   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:11:02.069619   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:11:02.069619   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:02.070048   14332 sshutil.go:53] new ssh client: &{IP:172.23.111.154 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\pause-851700\id_rsa Username:docker}
	I0514 01:11:02.190079   14332 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6040499s)
	I0514 01:11:02.197949   14332 ssh_runner.go:195] Run: cat /etc/os-release
	I0514 01:11:02.205403   14332 info.go:137] Remote host: Buildroot 2023.02.9
	I0514 01:11:02.205438   14332 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0514 01:11:02.205438   14332 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0514 01:11:02.206398   14332 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> 59842.pem in /etc/ssl/certs
	I0514 01:11:02.214998   14332 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0514 01:11:02.239327   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /etc/ssl/certs/59842.pem (1708 bytes)
	I0514 01:11:02.297683   14332 start.go:296] duration metric: took 4.7208061s for postStartSetup
	I0514 01:11:02.297683   14332 fix.go:56] duration metric: took 49.4722639s for fixHost
	I0514 01:11:02.297683   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:11:00.234360    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:00.741205    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:01.252042    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:01.738136    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:02.240993    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:02.754950    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:03.237699    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:03.746478    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:04.236777    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:04.738529    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:04.386721   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:11:04.387203   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:04.387578   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:11:06.809630   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:11:06.809630   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:06.814226   14332 main.go:141] libmachine: Using SSH client type: native
	I0514 01:11:06.814650   14332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.154 22 <nil> <nil>}
	I0514 01:11:06.814650   14332 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0514 01:11:06.955236   14332 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715649067.199006560
	
	I0514 01:11:06.955350   14332 fix.go:216] guest clock: 1715649067.199006560
	I0514 01:11:06.955350   14332 fix.go:229] Guest: 2024-05-14 01:11:07.19900656 +0000 UTC Remote: 2024-05-14 01:11:02.2976836 +0000 UTC m=+329.263475201 (delta=4.90132296s)
	I0514 01:11:06.955488   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:11:05.247554    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:05.740457    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:06.246371    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:06.737217    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:07.242065    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:07.735442    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:08.239331    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:08.742830    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:09.249739    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:09.740793    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:11.558829     744 start.go:364] duration metric: took 3m16.9120997s to acquireMachinesLock for "calico-204600"
	I0514 01:11:11.559247     744 start.go:93] Provisioning new machine with config: &{Name:calico-204600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:calico-204600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0514 01:11:11.559247     744 start.go:125] createHost starting for "" (driver="hyperv")
	I0514 01:11:10.248843    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:10.745666    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:11.247226    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:11.748765    7260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0514 01:11:11.917542    7260 kubeadm.go:1107] duration metric: took 12.8465186s to wait for elevateKubeSystemPrivileges
	W0514 01:11:11.917542    7260 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0514 01:11:11.917542    7260 kubeadm.go:393] duration metric: took 27.0789828s to StartCluster
	I0514 01:11:11.917542    7260 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:11:11.917542    7260 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0514 01:11:11.920548    7260 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:11:11.922548    7260 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0514 01:11:11.922548    7260 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0514 01:11:11.922548    7260 start.go:234] Will wait 15m0s for node &{Name: IP:172.23.99.4 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0514 01:11:11.922548    7260 addons.go:69] Setting storage-provisioner=true in profile "kindnet-204600"
	I0514 01:11:11.925546    7260 out.go:177] * Verifying Kubernetes components...
	I0514 01:11:11.922548    7260 addons.go:69] Setting default-storageclass=true in profile "kindnet-204600"
	I0514 01:11:11.922548    7260 addons.go:234] Setting addon storage-provisioner=true in "kindnet-204600"
	I0514 01:11:11.922548    7260 config.go:182] Loaded profile config "kindnet-204600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:11:08.988103   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:11:08.988103   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:08.988828   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:11:11.394561   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:11:11.394561   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:11.400109   14332 main.go:141] libmachine: Using SSH client type: native
	I0514 01:11:11.400961   14332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.111.154 22 <nil> <nil>}
	I0514 01:11:11.401029   14332 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715649066
	I0514 01:11:11.558145   14332 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 14 01:11:06 UTC 2024
	
	I0514 01:11:11.558145   14332 fix.go:236] clock set: Tue May 14 01:11:06 UTC 2024
	 (err=<nil>)
	I0514 01:11:11.558145   14332 start.go:83] releasing machines lock for "pause-851700", held for 58.7323315s
	I0514 01:11:11.558145   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:11:11.562752     744 out.go:204] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0514 01:11:11.562752     744 start.go:159] libmachine.API.Create for "calico-204600" (driver="hyperv")
	I0514 01:11:11.562752     744 client.go:168] LocalClient.Create starting
	I0514 01:11:11.563746     744 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0514 01:11:11.563746     744 main.go:141] libmachine: Decoding PEM data...
	I0514 01:11:11.563746     744 main.go:141] libmachine: Parsing certificate...
	I0514 01:11:11.563746     744 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0514 01:11:11.563746     744 main.go:141] libmachine: Decoding PEM data...
	I0514 01:11:11.563746     744 main.go:141] libmachine: Parsing certificate...
	I0514 01:11:11.563746     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0514 01:11:13.999122     744 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0514 01:11:13.999747     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:13.999747     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0514 01:11:11.925546    7260 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-204600"
	I0514 01:11:11.925546    7260 host.go:66] Checking if "kindnet-204600" exists ...
	I0514 01:11:11.929548    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:11:11.929548    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:11:11.945834    7260 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:11:12.341622    7260 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.23.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0514 01:11:12.531903    7260 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0514 01:11:13.074037    7260 start.go:946] {"host.minikube.internal": 172.23.96.1} host record injected into CoreDNS's ConfigMap
	I0514 01:11:13.078717    7260 node_ready.go:35] waiting up to 15m0s for node "kindnet-204600" to be "Ready" ...
	I0514 01:11:13.603219    7260 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kindnet-204600" context rescaled to 1 replicas
	I0514 01:11:14.769804    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:11:14.769804    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:14.771995    7260 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0514 01:11:14.775269    7260 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0514 01:11:14.775269    7260 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0514 01:11:14.775269    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:11:14.805525    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:11:14.805525    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:14.807684    7260 addons.go:234] Setting addon default-storageclass=true in "kindnet-204600"
	I0514 01:11:14.807872    7260 host.go:66] Checking if "kindnet-204600" exists ...
	I0514 01:11:14.808877    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:11:14.329898   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:11:14.330092   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:14.330152   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:11:17.629166   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:11:17.629236   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:17.632900   14332 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0514 01:11:17.632900   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:11:17.648771   14332 ssh_runner.go:195] Run: cat /version.json
	I0514 01:11:17.648771   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-851700 ).state
	I0514 01:11:16.322054     744 main.go:141] libmachine: [stdout =====>] : False
	
	I0514 01:11:16.322415     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:16.322415     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0514 01:11:18.166886     744 main.go:141] libmachine: [stdout =====>] : True
	
	I0514 01:11:18.167885     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:18.167885     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0514 01:11:15.100642    7260 node_ready.go:53] node "kindnet-204600" has status "Ready":"False"
	I0514 01:11:17.540567    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:11:17.540567    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:17.540567    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:11:17.540567    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:17.540567    7260 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0514 01:11:17.540567    7260 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0514 01:11:17.541580    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-204600 ).state
	I0514 01:11:17.541580    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:11:17.596609    7260 node_ready.go:53] node "kindnet-204600" has status "Ready":"False"
	I0514 01:11:20.373013   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:11:20.373152   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:20.373342   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:11:20.376564   14332 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:11:20.376564   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:20.376564   14332 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-851700 ).networkadapters[0]).ipaddresses[0]
	I0514 01:11:20.100006    7260 node_ready.go:53] node "kindnet-204600" has status "Ready":"False"
	I0514 01:11:20.345685    7260 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:11:20.345685    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:20.345685    7260 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:11:20.696662    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:11:20.696662    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:20.697172    7260 sshutil.go:53] new ssh client: &{IP:172.23.99.4 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\id_rsa Username:docker}
	I0514 01:11:20.966855    7260 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0514 01:11:22.254590    7260 node_ready.go:53] node "kindnet-204600" has status "Ready":"False"
	I0514 01:11:22.375196    7260 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.4082463s)
	I0514 01:11:23.372382    7260 main.go:141] libmachine: [stdout =====>] : 172.23.99.4
	
	I0514 01:11:23.372382    7260 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:23.373538    7260 sshutil.go:53] new ssh client: &{IP:172.23.99.4 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-204600\id_rsa Username:docker}
	I0514 01:11:23.529801    7260 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0514 01:11:23.728692    7260 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0514 01:11:23.035743     744 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0514 01:11:23.035743     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:23.037971     744 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-amd64.iso...
	I0514 01:11:23.380896     744 main.go:141] libmachine: Creating SSH key...
	I0514 01:11:23.696596     744 main.go:141] libmachine: Creating VM...
	I0514 01:11:23.696596     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0514 01:11:23.730819    7260 addons.go:505] duration metric: took 11.8074794s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0514 01:11:24.592592    7260 node_ready.go:53] node "kindnet-204600" has status "Ready":"False"
	I0514 01:11:23.225173   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:11:23.225173   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:23.225856   14332 sshutil.go:53] new ssh client: &{IP:172.23.111.154 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\pause-851700\id_rsa Username:docker}
	I0514 01:11:23.291941   14332 main.go:141] libmachine: [stdout =====>] : 172.23.111.154
	
	I0514 01:11:23.292042   14332 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:23.292631   14332 sshutil.go:53] new ssh client: &{IP:172.23.111.154 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\pause-851700\id_rsa Username:docker}
	I0514 01:11:25.323341   14332 ssh_runner.go:235] Completed: cat /version.json: (7.6739533s)
	I0514 01:11:25.323421   14332 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (7.6899247s)
	W0514 01:11:25.323421   14332 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2001 milliseconds
	W0514 01:11:25.324085   14332 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	W0514 01:11:25.324085   14332 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0514 01:11:25.333326   14332 ssh_runner.go:195] Run: systemctl --version
	I0514 01:11:25.353326   14332 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0514 01:11:25.363137   14332 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0514 01:11:25.374635   14332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0514 01:11:25.397159   14332 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0514 01:11:25.397159   14332 start.go:494] detecting cgroup driver to use...
	I0514 01:11:25.397159   14332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 01:11:25.453746   14332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0514 01:11:25.482743   14332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0514 01:11:25.507103   14332 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0514 01:11:25.519812   14332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0514 01:11:25.556261   14332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 01:11:25.590710   14332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0514 01:11:25.623667   14332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 01:11:25.658914   14332 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0514 01:11:25.691844   14332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0514 01:11:25.729682   14332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0514 01:11:25.768560   14332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0514 01:11:25.800694   14332 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0514 01:11:25.834878   14332 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0514 01:11:25.870474   14332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:11:26.130380   14332 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0514 01:11:26.161369   14332 start.go:494] detecting cgroup driver to use...
	I0514 01:11:26.170904   14332 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0514 01:11:26.204385   14332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 01:11:26.235750   14332 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0514 01:11:26.273627   14332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 01:11:26.305869   14332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0514 01:11:26.330914   14332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 01:11:26.380216   14332 ssh_runner.go:195] Run: which cri-dockerd
	I0514 01:11:26.404605   14332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0514 01:11:26.430070   14332 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0514 01:11:26.478874   14332 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0514 01:11:26.782949   14332 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0514 01:11:27.047745   14332 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0514 01:11:27.047745   14332 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0514 01:11:27.102874   14332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:11:27.360714   14332 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0514 01:11:26.654945     744 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0514 01:11:26.655105     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:26.655353     744 main.go:141] libmachine: Using switch "Default Switch"
	I0514 01:11:26.655581     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0514 01:11:28.381263     744 main.go:141] libmachine: [stdout =====>] : True
	
	I0514 01:11:28.381263     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:28.382217     744 main.go:141] libmachine: Creating VHD
	I0514 01:11:28.382217     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-204600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0514 01:11:26.598938    7260 node_ready.go:53] node "kindnet-204600" has status "Ready":"False"
	I0514 01:11:28.086004    7260 node_ready.go:49] node "kindnet-204600" has status "Ready":"True"
	I0514 01:11:28.086004    7260 node_ready.go:38] duration metric: took 15.0062801s for node "kindnet-204600" to be "Ready" ...
	I0514 01:11:28.086004    7260 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 01:11:28.099306    7260 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-g9c6b" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:30.119549    7260 pod_ready.go:102] pod "coredns-7db6d8ff4d-g9c6b" in "kube-system" namespace has status "Ready":"False"
	I0514 01:11:30.608479    7260 pod_ready.go:92] pod "coredns-7db6d8ff4d-g9c6b" in "kube-system" namespace has status "Ready":"True"
	I0514 01:11:30.609013    7260 pod_ready.go:81] duration metric: took 2.5095379s for pod "coredns-7db6d8ff4d-g9c6b" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:30.609061    7260 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:30.616764    7260 pod_ready.go:92] pod "etcd-kindnet-204600" in "kube-system" namespace has status "Ready":"True"
	I0514 01:11:30.616816    7260 pod_ready.go:81] duration metric: took 7.7113ms for pod "etcd-kindnet-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:30.616867    7260 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:30.624296    7260 pod_ready.go:92] pod "kube-apiserver-kindnet-204600" in "kube-system" namespace has status "Ready":"True"
	I0514 01:11:30.624385    7260 pod_ready.go:81] duration metric: took 7.5183ms for pod "kube-apiserver-kindnet-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:30.624385    7260 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:30.630738    7260 pod_ready.go:92] pod "kube-controller-manager-kindnet-204600" in "kube-system" namespace has status "Ready":"True"
	I0514 01:11:30.630738    7260 pod_ready.go:81] duration metric: took 6.3518ms for pod "kube-controller-manager-kindnet-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:30.630840    7260 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-9k6gx" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:30.637022    7260 pod_ready.go:92] pod "kube-proxy-9k6gx" in "kube-system" namespace has status "Ready":"True"
	I0514 01:11:30.637022    7260 pod_ready.go:81] duration metric: took 6.181ms for pod "kube-proxy-9k6gx" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:30.637022    7260 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:31.015166    7260 pod_ready.go:92] pod "kube-scheduler-kindnet-204600" in "kube-system" namespace has status "Ready":"True"
	I0514 01:11:31.015262    7260 pod_ready.go:81] duration metric: took 378.2154ms for pod "kube-scheduler-kindnet-204600" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:31.015262    7260 pod_ready.go:38] duration metric: took 2.929062s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 01:11:31.015423    7260 api_server.go:52] waiting for apiserver process to appear ...
	I0514 01:11:31.024474    7260 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 01:11:31.051415    7260 api_server.go:72] duration metric: took 19.1275836s to wait for apiserver process to appear ...
	I0514 01:11:31.051415    7260 api_server.go:88] waiting for apiserver healthz status ...
	I0514 01:11:31.051557    7260 api_server.go:253] Checking apiserver healthz at https://172.23.99.4:8443/healthz ...
	I0514 01:11:31.058236    7260 api_server.go:279] https://172.23.99.4:8443/healthz returned 200:
	ok
	I0514 01:11:31.060098    7260 api_server.go:141] control plane version: v1.30.0
	I0514 01:11:31.060154    7260 api_server.go:131] duration metric: took 8.5971ms to wait for apiserver health ...
	I0514 01:11:31.060154    7260 system_pods.go:43] waiting for kube-system pods to appear ...
	I0514 01:11:31.227120    7260 system_pods.go:59] 8 kube-system pods found
	I0514 01:11:31.227220    7260 system_pods.go:61] "coredns-7db6d8ff4d-g9c6b" [14fe8949-2d6f-4cc4-875a-41906f555bb8] Running
	I0514 01:11:31.227220    7260 system_pods.go:61] "etcd-kindnet-204600" [e223fa6d-7886-4c3f-9fb6-d62b585aa2e5] Running
	I0514 01:11:31.227220    7260 system_pods.go:61] "kindnet-cfmvs" [b7d81597-9401-4ec6-8ea6-b8896d7c01ee] Running
	I0514 01:11:31.227220    7260 system_pods.go:61] "kube-apiserver-kindnet-204600" [7443477a-7ade-4949-aa52-2f8c64653fa3] Running
	I0514 01:11:31.227220    7260 system_pods.go:61] "kube-controller-manager-kindnet-204600" [da7d4ca0-9ce7-4321-aee0-11feae96f366] Running
	I0514 01:11:31.227220    7260 system_pods.go:61] "kube-proxy-9k6gx" [fbc00844-bd79-4bc5-8a77-92dd79a5ab69] Running
	I0514 01:11:31.227220    7260 system_pods.go:61] "kube-scheduler-kindnet-204600" [7c26b954-6434-4f90-946a-cadb9459e8e1] Running
	I0514 01:11:31.227220    7260 system_pods.go:61] "storage-provisioner" [30aca202-5988-46db-b78b-5a14a898ecc0] Running
	I0514 01:11:31.227220    7260 system_pods.go:74] duration metric: took 167.054ms to wait for pod list to return data ...
	I0514 01:11:31.227220    7260 default_sa.go:34] waiting for default service account to be created ...
	I0514 01:11:31.408666    7260 default_sa.go:45] found service account: "default"
	I0514 01:11:31.408666    7260 default_sa.go:55] duration metric: took 181.4342ms for default service account to be created ...
	I0514 01:11:31.408666    7260 system_pods.go:116] waiting for k8s-apps to be running ...
	I0514 01:11:31.618504    7260 system_pods.go:86] 8 kube-system pods found
	I0514 01:11:31.618504    7260 system_pods.go:89] "coredns-7db6d8ff4d-g9c6b" [14fe8949-2d6f-4cc4-875a-41906f555bb8] Running
	I0514 01:11:31.618574    7260 system_pods.go:89] "etcd-kindnet-204600" [e223fa6d-7886-4c3f-9fb6-d62b585aa2e5] Running
	I0514 01:11:31.618574    7260 system_pods.go:89] "kindnet-cfmvs" [b7d81597-9401-4ec6-8ea6-b8896d7c01ee] Running
	I0514 01:11:31.618574    7260 system_pods.go:89] "kube-apiserver-kindnet-204600" [7443477a-7ade-4949-aa52-2f8c64653fa3] Running
	I0514 01:11:31.618574    7260 system_pods.go:89] "kube-controller-manager-kindnet-204600" [da7d4ca0-9ce7-4321-aee0-11feae96f366] Running
	I0514 01:11:31.618574    7260 system_pods.go:89] "kube-proxy-9k6gx" [fbc00844-bd79-4bc5-8a77-92dd79a5ab69] Running
	I0514 01:11:31.618574    7260 system_pods.go:89] "kube-scheduler-kindnet-204600" [7c26b954-6434-4f90-946a-cadb9459e8e1] Running
	I0514 01:11:31.618574    7260 system_pods.go:89] "storage-provisioner" [30aca202-5988-46db-b78b-5a14a898ecc0] Running
	I0514 01:11:31.618574    7260 system_pods.go:126] duration metric: took 209.8939ms to wait for k8s-apps to be running ...
	I0514 01:11:31.618574    7260 system_svc.go:44] waiting for kubelet service to be running ....
	I0514 01:11:31.628490    7260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0514 01:11:31.652285    7260 system_svc.go:56] duration metric: took 33.6058ms WaitForService to wait for kubelet
	I0514 01:11:31.652285    7260 kubeadm.go:576] duration metric: took 19.7284135s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0514 01:11:31.652384    7260 node_conditions.go:102] verifying NodePressure condition ...
	I0514 01:11:31.807374    7260 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 01:11:31.807482    7260 node_conditions.go:123] node cpu capacity is 2
	I0514 01:11:31.807482    7260 node_conditions.go:105] duration metric: took 155.0873ms to run NodePressure ...
	I0514 01:11:31.807482    7260 start.go:240] waiting for startup goroutines ...
	I0514 01:11:31.807482    7260 start.go:245] waiting for cluster config update ...
	I0514 01:11:31.807482    7260 start.go:254] writing updated cluster config ...
	I0514 01:11:31.816289    7260 ssh_runner.go:195] Run: rm -f paused
	I0514 01:11:31.939225    7260 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0514 01:11:31.945228    7260 out.go:177] * Done! kubectl is now configured to use "kindnet-204600" cluster and "default" namespace by default
	I0514 01:11:32.075105     744 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-204600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 91817EFD-298A-4F06-B898-93D1B41E87FD
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0514 01:11:32.075105     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:32.075105     744 main.go:141] libmachine: Writing magic tar header
	I0514 01:11:32.075209     744 main.go:141] libmachine: Writing SSH key tar header
	I0514 01:11:32.083358     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-204600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-204600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0514 01:11:35.191325     744 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:11:35.192286     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:35.192286     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-204600\disk.vhd' -SizeBytes 20000MB
	I0514 01:11:37.636592     744 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:11:37.636592     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:37.636592     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM calico-204600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-204600' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
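	Each `[executing ==>]` line above runs a single Hyper-V cmdlet through `powershell.exe -NoProfile -NonInteractive`. A hedged sketch of how such an argument vector can be assembled (the helper name is illustrative, not libmachine's actual API):

```go
package main

import (
	"fmt"
	"strings"
)

// psArgs builds the argument vector for a non-interactive PowerShell
// invocation of one Hyper-V cmdlet, as seen in the log above.
func psArgs(cmdlet string, args ...string) []string {
	return append([]string{"-NoProfile", "-NonInteractive", cmdlet}, args...)
}

func main() {
	v := psArgs(`Hyper-V\Resize-VHD`, "-Path", "disk.vhd", "-SizeBytes", "20000MB")
	fmt.Println(strings.Join(v, " "))
	// -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path disk.vhd -SizeBytes 20000MB
}
```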
	I0514 01:11:40.399684   14332 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.0369443s)
	I0514 01:11:40.413127   14332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0514 01:11:40.462182   14332 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0514 01:11:40.530497   14332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 01:11:40.572672   14332 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0514 01:11:40.799257   14332 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0514 01:11:41.040161   14332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:11:41.260173   14332 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0514 01:11:41.308922   14332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 01:11:41.342730   14332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:11:41.578325   14332 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0514 01:11:41.733566   14332 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0514 01:11:41.747054   14332 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0514 01:11:41.767793   14332 start.go:562] Will wait 60s for crictl version
	I0514 01:11:41.778790   14332 ssh_runner.go:195] Run: which crictl
	I0514 01:11:41.807433   14332 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0514 01:11:41.873479   14332 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0514 01:11:41.880479   14332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 01:11:41.924471   14332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 01:11:41.975835   14332 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0514 01:11:41.976024   14332 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0514 01:11:41.980632   14332 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0514 01:11:41.980632   14332 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0514 01:11:41.980632   14332 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0514 01:11:41.980632   14332 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:95:ed Flags:up|broadcast|multicast|running}
	I0514 01:11:41.983734   14332 ip.go:210] interface addr: fe80::3ceb:68d:afab:af25/64
	I0514 01:11:41.983734   14332 ip.go:210] interface addr: 172.23.96.1/20
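	The `getIPForInterface` search above walks the host's interfaces and keeps the first whose name starts with the wanted prefix, rejecting "Ethernet 2" and the loopback. The matching step reduces to roughly this (function name is ours):

```go
package main

import (
	"fmt"
	"strings"
)

// findByPrefix returns the first interface name that starts with the
// wanted prefix, or "" if none matches — mirroring the log lines above.
func findByPrefix(names []string, prefix string) string {
	for _, n := range names {
		if strings.HasPrefix(n, prefix) {
			return n
		}
	}
	return ""
}

func main() {
	names := []string{"Ethernet 2", "Loopback Pseudo-Interface 1", "vEthernet (Default Switch)"}
	fmt.Println(findByPrefix(names, "vEthernet (Default Switch)")) // vEthernet (Default Switch)
}
```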
	I0514 01:11:41.992765   14332 ssh_runner.go:195] Run: grep 172.23.96.1	host.minikube.internal$ /etc/hosts
	I0514 01:11:42.002212   14332 kubeadm.go:877] updating cluster {Name:pause-851700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-851700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.111.154 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0514 01:11:42.002212   14332 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0514 01:11:42.011915   14332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0514 01:11:42.039415   14332 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0514 01:11:42.039494   14332 docker.go:615] Images already preloaded, skipping extraction
	I0514 01:11:42.047964   14332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0514 01:11:42.074506   14332 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0514 01:11:42.074506   14332 cache_images.go:84] Images are preloaded, skipping loading
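	The "Images are preloaded, skipping loading" decision above amounts to checking that every expected image appears in the `docker images` output. A minimal set-difference sketch of that check (helper name is ours):

```go
package main

import "fmt"

// missingImages returns the expected image refs not present in got.
func missingImages(expected, got []string) []string {
	have := make(map[string]bool, len(got))
	for _, g := range got {
		have[g] = true
	}
	var missing []string
	for _, e := range expected {
		if !have[e] {
			missing = append(missing, e)
		}
	}
	return missing
}

func main() {
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.30.0",
		"registry.k8s.io/etcd:3.5.12-0",
	}
	got := []string{"registry.k8s.io/kube-apiserver:v1.30.0"}
	fmt.Println(missingImages(expected, got)) // [registry.k8s.io/etcd:3.5.12-0]
}
```

	An empty result means extraction can be skipped, as it was here.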
	I0514 01:11:42.074506   14332 kubeadm.go:928] updating node { 172.23.111.154 8443 v1.30.0 docker true true} ...
	I0514 01:11:42.074506   14332 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-851700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.111.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:pause-851700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0514 01:11:42.083992   14332 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0514 01:11:42.119963   14332 cni.go:84] Creating CNI manager for ""
	I0514 01:11:42.120051   14332 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0514 01:11:42.120051   14332 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0514 01:11:42.120146   14332 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.111.154 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-851700 NodeName:pause-851700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.111.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.111.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0514 01:11:42.120322   14332 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.111.154
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "pause-851700"
	  kubeletExtraArgs:
	    node-ip: 172.23.111.154
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.111.154"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0514 01:11:42.131148   14332 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0514 01:11:42.151031   14332 binaries.go:44] Found k8s binaries, skipping transfer
	I0514 01:11:42.165837   14332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0514 01:11:42.185242   14332 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0514 01:11:42.218813   14332 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0514 01:11:42.253832   14332 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0514 01:11:42.297217   14332 ssh_runner.go:195] Run: grep 172.23.111.154	control-plane.minikube.internal$ /etc/hosts
	I0514 01:11:42.318126   14332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:11:42.619893   14332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0514 01:11:42.669123   14332 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\pause-851700 for IP: 172.23.111.154
	I0514 01:11:42.669197   14332 certs.go:194] generating shared ca certs ...
	I0514 01:11:42.669197   14332 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:11:42.669837   14332 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0514 01:11:42.669837   14332 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0514 01:11:42.670448   14332 certs.go:256] generating profile certs ...
	I0514 01:11:42.671214   14332 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\pause-851700\client.key
	I0514 01:11:42.671641   14332 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\pause-851700\apiserver.key.0c09c35c
	I0514 01:11:42.672060   14332 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\pause-851700\proxy-client.key
	I0514 01:11:42.673278   14332 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem (1338 bytes)
	W0514 01:11:42.673833   14332 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984_empty.pem, impossibly tiny 0 bytes
	I0514 01:11:42.674042   14332 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0514 01:11:42.674275   14332 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0514 01:11:42.674681   14332 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0514 01:11:42.675024   14332 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0514 01:11:42.675621   14332 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem (1708 bytes)
	I0514 01:11:42.677804   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0514 01:11:42.774965   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0514 01:11:42.849727   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0514 01:11:42.923725   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0514 01:11:42.990331   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\pause-851700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0514 01:11:43.049736   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\pause-851700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0514 01:11:43.101781   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\pause-851700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0514 01:11:41.256720     744 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	calico-204600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0514 01:11:41.256720     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:41.256720     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName calico-204600 -DynamicMemoryEnabled $false
	I0514 01:11:43.779833     744 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:11:43.779833     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:43.779833     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor calico-204600 -Count 2
	I0514 01:11:43.180030   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\pause-851700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0514 01:11:43.261761   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0514 01:11:43.321809   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\5984.pem --> /usr/share/ca-certificates/5984.pem (1338 bytes)
	I0514 01:11:43.403140   14332 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /usr/share/ca-certificates/59842.pem (1708 bytes)
	I0514 01:11:43.481818   14332 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0514 01:11:43.537331   14332 ssh_runner.go:195] Run: openssl version
	I0514 01:11:43.557614   14332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5984.pem && ln -fs /usr/share/ca-certificates/5984.pem /etc/ssl/certs/5984.pem"
	I0514 01:11:43.598548   14332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5984.pem
	I0514 01:11:43.609063   14332 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 13 22:38 /usr/share/ca-certificates/5984.pem
	I0514 01:11:43.618911   14332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5984.pem
	I0514 01:11:43.643552   14332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5984.pem /etc/ssl/certs/51391683.0"
	I0514 01:11:43.721054   14332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59842.pem && ln -fs /usr/share/ca-certificates/59842.pem /etc/ssl/certs/59842.pem"
	I0514 01:11:43.757838   14332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59842.pem
	I0514 01:11:43.764838   14332 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 13 22:38 /usr/share/ca-certificates/59842.pem
	I0514 01:11:43.778835   14332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59842.pem
	I0514 01:11:43.820776   14332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59842.pem /etc/ssl/certs/3ec20f2e.0"
	I0514 01:11:43.857792   14332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0514 01:11:43.912009   14332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0514 01:11:43.927755   14332 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 13 22:24 /usr/share/ca-certificates/minikubeCA.pem
	I0514 01:11:43.942026   14332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0514 01:11:43.968029   14332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0514 01:11:44.016037   14332 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0514 01:11:44.034768   14332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0514 01:11:44.058495   14332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0514 01:11:44.079118   14332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0514 01:11:44.098002   14332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0514 01:11:44.122882   14332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0514 01:11:44.155889   14332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0514 01:11:44.171511   14332 kubeadm.go:391] StartCluster: {Name:pause-851700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-851700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.111.154 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0514 01:11:44.184352   14332 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0514 01:11:44.243807   14332 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0514 01:11:44.279197   14332 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0514 01:11:44.279197   14332 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0514 01:11:44.279197   14332 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0514 01:11:44.292823   14332 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0514 01:11:44.333410   14332 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0514 01:11:44.334859   14332 kubeconfig.go:125] found "pause-851700" server: "https://172.23.111.154:8443"
	I0514 01:11:44.338795   14332 kapi.go:59] client config for pause-851700: &rest.Config{Host:"https://172.23.111.154:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\pause-851700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\pause-851700\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2178ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0514 01:11:44.350812   14332 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0514 01:11:44.381795   14332 kubeadm.go:624] The running cluster does not require reconfiguration: 172.23.111.154
	I0514 01:11:44.382800   14332 kubeadm.go:1154] stopping kube-system containers ...
	I0514 01:11:44.391837   14332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0514 01:11:44.480445   14332 docker.go:483] Stopping containers: [6088c2f87d78 3aa29f1051a6 07a402b65f7b 0546d4d05920 18eaec56489e 798a552412b8 f132fb594539 f388a99b7b43 5e24fe2e11bc c10f377eb282 478154bf5b5d 622c6ea48abc 7f4ef90b527b 42e4b7e0c0f9 9ed92f927933 d83b1ad1e1b8 d811e1abea1c bf69bb42be15 193d347f287d 7bd3613875f3 57d32ddf206f e8320cd44a55 e5c2689660d3 e2620eeb5a5e 393373d0eda5]
	I0514 01:11:44.491102   14332 ssh_runner.go:195] Run: docker stop 6088c2f87d78 3aa29f1051a6 07a402b65f7b 0546d4d05920 18eaec56489e 798a552412b8 f132fb594539 f388a99b7b43 5e24fe2e11bc c10f377eb282 478154bf5b5d 622c6ea48abc 7f4ef90b527b 42e4b7e0c0f9 9ed92f927933 d83b1ad1e1b8 d811e1abea1c bf69bb42be15 193d347f287d 7bd3613875f3 57d32ddf206f e8320cd44a55 e5c2689660d3 e2620eeb5a5e 393373d0eda5
	I0514 01:11:45.431991   14332 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0514 01:11:45.532784   14332 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0514 01:11:45.559621   14332 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5651 May 14 01:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 May 14 01:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 May 14 01:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 May 14 01:04 /etc/kubernetes/scheduler.conf
	
	I0514 01:11:45.569676   14332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0514 01:11:45.602764   14332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0514 01:11:45.640275   14332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0514 01:11:45.678948   14332 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0514 01:11:45.687924   14332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0514 01:11:45.713930   14332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0514 01:11:45.731769   14332 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0514 01:11:45.741425   14332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0514 01:11:45.770753   14332 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0514 01:11:45.792625   14332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0514 01:11:45.899026   14332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0514 01:11:47.026918   14332 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.1277378s)
	I0514 01:11:47.026918   14332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0514 01:11:47.352595   14332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0514 01:11:47.493976   14332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0514 01:11:47.639965   14332 api_server.go:52] waiting for apiserver process to appear ...
	I0514 01:11:47.655817   14332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 01:11:48.165248   14332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 01:11:46.229057     744 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:11:46.229368     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:46.229437     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName calico-204600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-204600\boot2docker.iso'
	I0514 01:11:48.890743     744 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:11:48.890743     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:48.891521     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName calico-204600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-204600\disk.vhd'
	I0514 01:11:48.652761   14332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 01:11:49.151491   14332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 01:11:49.661443   14332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 01:11:49.694263   14332 api_server.go:72] duration metric: took 2.0541028s to wait for apiserver process to appear ...
	I0514 01:11:49.694316   14332 api_server.go:88] waiting for apiserver healthz status ...
	I0514 01:11:49.694374   14332 api_server.go:253] Checking apiserver healthz at https://172.23.111.154:8443/healthz ...
	I0514 01:11:51.559622     744 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:11:51.559675     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:51.559675     744 main.go:141] libmachine: Starting VM...
	I0514 01:11:51.559758     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM calico-204600
	I0514 01:11:53.705984   14332 api_server.go:279] https://172.23.111.154:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0514 01:11:53.706380   14332 api_server.go:103] status: https://172.23.111.154:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0514 01:11:53.706380   14332 api_server.go:253] Checking apiserver healthz at https://172.23.111.154:8443/healthz ...
	I0514 01:11:53.778025   14332 api_server.go:279] https://172.23.111.154:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0514 01:11:53.778025   14332 api_server.go:103] status: https://172.23.111.154:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0514 01:11:54.202905   14332 api_server.go:253] Checking apiserver healthz at https://172.23.111.154:8443/healthz ...
	I0514 01:11:54.211460   14332 api_server.go:279] https://172.23.111.154:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0514 01:11:54.211604   14332 api_server.go:103] status: https://172.23.111.154:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0514 01:11:54.708616   14332 api_server.go:253] Checking apiserver healthz at https://172.23.111.154:8443/healthz ...
	I0514 01:11:54.717589   14332 api_server.go:279] https://172.23.111.154:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0514 01:11:54.717589   14332 api_server.go:103] status: https://172.23.111.154:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0514 01:11:55.196866   14332 api_server.go:253] Checking apiserver healthz at https://172.23.111.154:8443/healthz ...
	I0514 01:11:55.222580   14332 api_server.go:279] https://172.23.111.154:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0514 01:11:55.223265   14332 api_server.go:103] status: https://172.23.111.154:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0514 01:11:55.703049   14332 api_server.go:253] Checking apiserver healthz at https://172.23.111.154:8443/healthz ...
	I0514 01:11:55.710266   14332 api_server.go:279] https://172.23.111.154:8443/healthz returned 200:
	ok
	I0514 01:11:55.728100   14332 api_server.go:141] control plane version: v1.30.0
	I0514 01:11:55.728100   14332 api_server.go:131] duration metric: took 6.0333783s to wait for apiserver health ...
	I0514 01:11:55.728100   14332 cni.go:84] Creating CNI manager for ""
	I0514 01:11:55.728100   14332 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0514 01:11:55.731229   14332 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0514 01:11:55.742029   14332 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0514 01:11:55.770353   14332 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0514 01:11:55.825530   14332 system_pods.go:43] waiting for kube-system pods to appear ...
	I0514 01:11:55.854958   14332 system_pods.go:59] 6 kube-system pods found
	I0514 01:11:55.854958   14332 system_pods.go:61] "coredns-7db6d8ff4d-ntqd5" [10fdf7e7-0874-4abd-911e-88f6950f220a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0514 01:11:55.854958   14332 system_pods.go:61] "etcd-pause-851700" [8f211517-c814-49ef-ac6c-f22b10e36b62] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0514 01:11:55.854958   14332 system_pods.go:61] "kube-apiserver-pause-851700" [7bd68de3-ee66-48ce-899b-a7be9c13339c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0514 01:11:55.854958   14332 system_pods.go:61] "kube-controller-manager-pause-851700" [1dfabfcc-5216-403e-bc07-ca5f978e5435] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0514 01:11:55.854958   14332 system_pods.go:61] "kube-proxy-8qgfs" [0214f901-7bdf-4eab-81a1-5f041f2be6c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0514 01:11:55.854958   14332 system_pods.go:61] "kube-scheduler-pause-851700" [e1db2a1e-d04b-45ff-9ee0-f1fcf52b420f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0514 01:11:55.854958   14332 system_pods.go:74] duration metric: took 29.4267ms to wait for pod list to return data ...
	I0514 01:11:55.854958   14332 node_conditions.go:102] verifying NodePressure condition ...
	I0514 01:11:55.900459   14332 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 01:11:55.900543   14332 node_conditions.go:123] node cpu capacity is 2
	I0514 01:11:55.900543   14332 node_conditions.go:105] duration metric: took 45.582ms to run NodePressure ...
	I0514 01:11:55.900543   14332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0514 01:11:56.498857   14332 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0514 01:11:56.505938   14332 kubeadm.go:733] kubelet initialised
	I0514 01:11:56.505938   14332 kubeadm.go:734] duration metric: took 7.0804ms waiting for restarted kubelet to initialise ...
	I0514 01:11:56.505938   14332 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 01:11:56.516515   14332 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-ntqd5" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:54.842128     744 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:11:54.842230     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:54.842230     744 main.go:141] libmachine: Waiting for host to start...
	I0514 01:11:54.842305     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:11:57.248239     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:11:57.248835     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:11:57.248961     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:11:58.534074   14332 pod_ready.go:102] pod "coredns-7db6d8ff4d-ntqd5" in "kube-system" namespace has status "Ready":"False"
	I0514 01:12:00.537488   14332 pod_ready.go:102] pod "coredns-7db6d8ff4d-ntqd5" in "kube-system" namespace has status "Ready":"False"
	I0514 01:12:01.531177   14332 pod_ready.go:92] pod "coredns-7db6d8ff4d-ntqd5" in "kube-system" namespace has status "Ready":"True"
	I0514 01:12:01.531262   14332 pod_ready.go:81] duration metric: took 5.0144106s for pod "coredns-7db6d8ff4d-ntqd5" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:01.531262   14332 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:11:59.688055     744 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:11:59.688055     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:00.701210     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:12:02.883972     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:12:02.884160     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:02.884160     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:12:03.556950   14332 pod_ready.go:102] pod "etcd-pause-851700" in "kube-system" namespace has status "Ready":"False"
	I0514 01:12:06.056861   14332 pod_ready.go:102] pod "etcd-pause-851700" in "kube-system" namespace has status "Ready":"False"
	I0514 01:12:05.342786     744 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:12:05.343668     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:06.349348     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:12:08.644997     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:12:08.644997     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:08.644997     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:12:08.543040   14332 pod_ready.go:102] pod "etcd-pause-851700" in "kube-system" namespace has status "Ready":"False"
	I0514 01:12:10.052481   14332 pod_ready.go:92] pod "etcd-pause-851700" in "kube-system" namespace has status "Ready":"True"
	I0514 01:12:10.052546   14332 pod_ready.go:81] duration metric: took 8.5207117s for pod "etcd-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:10.052606   14332 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:10.062843   14332 pod_ready.go:92] pod "kube-apiserver-pause-851700" in "kube-system" namespace has status "Ready":"True"
	I0514 01:12:10.062843   14332 pod_ready.go:81] duration metric: took 10.2362ms for pod "kube-apiserver-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:10.062843   14332 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:10.071180   14332 pod_ready.go:92] pod "kube-controller-manager-pause-851700" in "kube-system" namespace has status "Ready":"True"
	I0514 01:12:10.071231   14332 pod_ready.go:81] duration metric: took 8.3882ms for pod "kube-controller-manager-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:10.071231   14332 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8qgfs" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:10.078893   14332 pod_ready.go:92] pod "kube-proxy-8qgfs" in "kube-system" namespace has status "Ready":"True"
	I0514 01:12:10.078947   14332 pod_ready.go:81] duration metric: took 7.7149ms for pod "kube-proxy-8qgfs" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:10.078947   14332 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:10.085021   14332 pod_ready.go:92] pod "kube-scheduler-pause-851700" in "kube-system" namespace has status "Ready":"True"
	I0514 01:12:10.085021   14332 pod_ready.go:81] duration metric: took 6.0739ms for pod "kube-scheduler-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:10.085021   14332 pod_ready.go:38] duration metric: took 13.5781722s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 01:12:10.085021   14332 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0514 01:12:10.104428   14332 ops.go:34] apiserver oom_adj: -16
	I0514 01:12:10.104428   14332 kubeadm.go:591] duration metric: took 25.8234985s to restartPrimaryControlPlane
	I0514 01:12:10.104428   14332 kubeadm.go:393] duration metric: took 25.9311771s to StartCluster
	I0514 01:12:10.104553   14332 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:12:10.104627   14332 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0514 01:12:10.110790   14332 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0514 01:12:10.112124   14332 start.go:234] Will wait 6m0s for node &{Name: IP:172.23.111.154 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0514 01:12:10.112124   14332 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0514 01:12:10.115777   14332 out.go:177] * Verifying Kubernetes components...
	I0514 01:12:10.112679   14332 config.go:182] Loaded profile config "pause-851700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:12:10.119666   14332 out.go:177] * Enabled addons: 
	I0514 01:12:10.128850   14332 addons.go:505] duration metric: took 16.8092ms for enable addons: enabled=[]
	I0514 01:12:10.134852   14332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:12:10.399514   14332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0514 01:12:10.441197   14332 node_ready.go:35] waiting up to 6m0s for node "pause-851700" to be "Ready" ...
	I0514 01:12:10.447177   14332 node_ready.go:49] node "pause-851700" has status "Ready":"True"
	I0514 01:12:10.447177   14332 node_ready.go:38] duration metric: took 5.9797ms for node "pause-851700" to be "Ready" ...
	I0514 01:12:10.447177   14332 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 01:12:10.457165   14332 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ntqd5" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:10.861614   14332 pod_ready.go:92] pod "coredns-7db6d8ff4d-ntqd5" in "kube-system" namespace has status "Ready":"True"
	I0514 01:12:10.861669   14332 pod_ready.go:81] duration metric: took 404.4774ms for pod "coredns-7db6d8ff4d-ntqd5" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:10.861669   14332 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:11.254797   14332 pod_ready.go:92] pod "etcd-pause-851700" in "kube-system" namespace has status "Ready":"True"
	I0514 01:12:11.254797   14332 pod_ready.go:81] duration metric: took 393.1009ms for pod "etcd-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:11.254797   14332 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:11.654281   14332 pod_ready.go:92] pod "kube-apiserver-pause-851700" in "kube-system" namespace has status "Ready":"True"
	I0514 01:12:11.654281   14332 pod_ready.go:81] duration metric: took 399.4576ms for pod "kube-apiserver-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:11.654281   14332 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:12.049435   14332 pod_ready.go:92] pod "kube-controller-manager-pause-851700" in "kube-system" namespace has status "Ready":"True"
	I0514 01:12:12.049482   14332 pod_ready.go:81] duration metric: took 395.1748ms for pod "kube-controller-manager-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:12.049482   14332 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8qgfs" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:12.460412   14332 pod_ready.go:92] pod "kube-proxy-8qgfs" in "kube-system" namespace has status "Ready":"True"
	I0514 01:12:12.460412   14332 pod_ready.go:81] duration metric: took 410.9019ms for pod "kube-proxy-8qgfs" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:12.460412   14332 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:12.852398   14332 pod_ready.go:92] pod "kube-scheduler-pause-851700" in "kube-system" namespace has status "Ready":"True"
	I0514 01:12:12.852398   14332 pod_ready.go:81] duration metric: took 391.9595ms for pod "kube-scheduler-pause-851700" in "kube-system" namespace to be "Ready" ...
	I0514 01:12:12.852398   14332 pod_ready.go:38] duration metric: took 2.4050595s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0514 01:12:12.852398   14332 api_server.go:52] waiting for apiserver process to appear ...
	I0514 01:12:12.862409   14332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 01:12:12.892519   14332 api_server.go:72] duration metric: took 2.7801231s to wait for apiserver process to appear ...
	I0514 01:12:12.892587   14332 api_server.go:88] waiting for apiserver healthz status ...
	I0514 01:12:12.892651   14332 api_server.go:253] Checking apiserver healthz at https://172.23.111.154:8443/healthz ...
	I0514 01:12:12.906215   14332 api_server.go:279] https://172.23.111.154:8443/healthz returned 200:
	ok
	I0514 01:12:12.908487   14332 api_server.go:141] control plane version: v1.30.0
	I0514 01:12:12.908487   14332 api_server.go:131] duration metric: took 15.8351ms to wait for apiserver health ...
	I0514 01:12:12.908487   14332 system_pods.go:43] waiting for kube-system pods to appear ...
	I0514 01:12:13.069296   14332 system_pods.go:59] 6 kube-system pods found
	I0514 01:12:13.069337   14332 system_pods.go:61] "coredns-7db6d8ff4d-ntqd5" [10fdf7e7-0874-4abd-911e-88f6950f220a] Running
	I0514 01:12:13.069337   14332 system_pods.go:61] "etcd-pause-851700" [8f211517-c814-49ef-ac6c-f22b10e36b62] Running
	I0514 01:12:13.069337   14332 system_pods.go:61] "kube-apiserver-pause-851700" [7bd68de3-ee66-48ce-899b-a7be9c13339c] Running
	I0514 01:12:13.069337   14332 system_pods.go:61] "kube-controller-manager-pause-851700" [1dfabfcc-5216-403e-bc07-ca5f978e5435] Running
	I0514 01:12:13.069395   14332 system_pods.go:61] "kube-proxy-8qgfs" [0214f901-7bdf-4eab-81a1-5f041f2be6c5] Running
	I0514 01:12:13.069395   14332 system_pods.go:61] "kube-scheduler-pause-851700" [e1db2a1e-d04b-45ff-9ee0-f1fcf52b420f] Running
	I0514 01:12:13.069420   14332 system_pods.go:74] duration metric: took 160.8971ms to wait for pod list to return data ...
	I0514 01:12:13.069420   14332 default_sa.go:34] waiting for default service account to be created ...
	I0514 01:12:13.260374   14332 default_sa.go:45] found service account: "default"
	I0514 01:12:13.260374   14332 default_sa.go:55] duration metric: took 190.9414ms for default service account to be created ...
	I0514 01:12:13.260374   14332 system_pods.go:116] waiting for k8s-apps to be running ...
	I0514 01:12:13.453389   14332 system_pods.go:86] 6 kube-system pods found
	I0514 01:12:13.453389   14332 system_pods.go:89] "coredns-7db6d8ff4d-ntqd5" [10fdf7e7-0874-4abd-911e-88f6950f220a] Running
	I0514 01:12:13.453389   14332 system_pods.go:89] "etcd-pause-851700" [8f211517-c814-49ef-ac6c-f22b10e36b62] Running
	I0514 01:12:13.453389   14332 system_pods.go:89] "kube-apiserver-pause-851700" [7bd68de3-ee66-48ce-899b-a7be9c13339c] Running
	I0514 01:12:13.453389   14332 system_pods.go:89] "kube-controller-manager-pause-851700" [1dfabfcc-5216-403e-bc07-ca5f978e5435] Running
	I0514 01:12:13.453389   14332 system_pods.go:89] "kube-proxy-8qgfs" [0214f901-7bdf-4eab-81a1-5f041f2be6c5] Running
	I0514 01:12:13.453389   14332 system_pods.go:89] "kube-scheduler-pause-851700" [e1db2a1e-d04b-45ff-9ee0-f1fcf52b420f] Running
	I0514 01:12:13.453389   14332 system_pods.go:126] duration metric: took 193.0024ms to wait for k8s-apps to be running ...
	I0514 01:12:13.453389   14332 system_svc.go:44] waiting for kubelet service to be running ....
	I0514 01:12:13.467393   14332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0514 01:12:13.503917   14332 system_svc.go:56] duration metric: took 50.4869ms WaitForService to wait for kubelet
	I0514 01:12:13.503980   14332 kubeadm.go:576] duration metric: took 3.3915429s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0514 01:12:13.503980   14332 node_conditions.go:102] verifying NodePressure condition ...
	I0514 01:12:13.653212   14332 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0514 01:12:13.653212   14332 node_conditions.go:123] node cpu capacity is 2
	I0514 01:12:13.653758   14332 node_conditions.go:105] duration metric: took 149.7683ms to run NodePressure ...
	I0514 01:12:13.653758   14332 start.go:240] waiting for startup goroutines ...
	I0514 01:12:13.653819   14332 start.go:245] waiting for cluster config update ...
	I0514 01:12:13.653819   14332 start.go:254] writing updated cluster config ...
	I0514 01:12:13.669726   14332 ssh_runner.go:195] Run: rm -f paused
	I0514 01:12:13.810990   14332 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0514 01:12:13.815021   14332 out.go:177] * Done! kubectl is now configured to use "pause-851700" cluster and "default" namespace by default
	I0514 01:12:11.112581     744 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:12:11.112581     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:12.124000     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:12:14.548641     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:12:14.548641     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:14.548641     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:12:17.466878     744 main.go:141] libmachine: [stdout =====>] : 
	I0514 01:12:17.466878     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:18.480050     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:12:21.061046     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:12:21.061046     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:21.061177     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:12:23.998675     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:12:23.998675     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:23.998873     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:12:26.321136     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:12:26.321693     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:26.321693     744 machine.go:94] provisionDockerMachine start ...
	I0514 01:12:26.321900     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:12:28.686188     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:12:28.686188     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:28.686188     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:12:31.410236     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:12:31.410236     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:31.417276     744 main.go:141] libmachine: Using SSH client type: native
	I0514 01:12:31.417656     744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.106.124 22 <nil> <nil>}
	I0514 01:12:31.417656     744 main.go:141] libmachine: About to run SSH command:
	hostname
	I0514 01:12:31.562759     744 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0514 01:12:31.562759     744 buildroot.go:166] provisioning hostname "calico-204600"
	I0514 01:12:31.562759     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:12:33.840269     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:12:33.840322     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:33.840322     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:12:36.568456     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:12:36.568590     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:36.574154     744 main.go:141] libmachine: Using SSH client type: native
	I0514 01:12:36.574907     744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.106.124 22 <nil> <nil>}
	I0514 01:12:36.574907     744 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-204600 && echo "calico-204600" | sudo tee /etc/hostname
	I0514 01:12:36.760819     744 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-204600
	
	I0514 01:12:36.760893     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:12:39.063025     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:12:39.063171     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:39.063227     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:12:41.800938     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:12:41.801004     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:41.805838     744 main.go:141] libmachine: Using SSH client type: native
	I0514 01:12:41.806432     744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.106.124 22 <nil> <nil>}
	I0514 01:12:41.806474     744 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-204600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-204600/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-204600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0514 01:12:41.970277     744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0514 01:12:41.970277     744 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0514 01:12:41.970277     744 buildroot.go:174] setting up certificates
	I0514 01:12:41.970277     744 provision.go:84] configureAuth start
	I0514 01:12:41.970277     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:12:44.265055     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:12:44.265055     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:44.265177     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:12:47.173167     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:12:47.173296     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:47.173296     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:12:49.584844     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:12:49.584844     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:49.585006     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:12:52.363398     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:12:52.363503     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:52.363564     744 provision.go:143] copyHostCerts
	I0514 01:12:52.363979     744 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0514 01:12:52.363979     744 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0514 01:12:52.364668     744 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0514 01:12:52.366069     744 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0514 01:12:52.366069     744 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0514 01:12:52.366694     744 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0514 01:12:52.368163     744 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0514 01:12:52.368163     744 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0514 01:12:52.368423     744 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0514 01:12:52.370026     744 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.calico-204600 san=[127.0.0.1 172.23.106.124 calico-204600 localhost minikube]
	I0514 01:12:52.555598     744 provision.go:177] copyRemoteCerts
	I0514 01:12:52.563590     744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0514 01:12:52.563590     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:12:54.840874     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:12:54.841271     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:54.841388     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:12:57.601993     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:12:57.602065     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:12:57.602065     744 sshutil.go:53] new ssh client: &{IP:172.23.106.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-204600\id_rsa Username:docker}
	I0514 01:12:57.716833     744 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1528574s)
	I0514 01:12:57.716833     744 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0514 01:12:57.770421     744 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0514 01:12:57.821289     744 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0514 01:12:57.868095     744 provision.go:87] duration metric: took 15.8967535s to configureAuth
	I0514 01:12:57.868095     744 buildroot.go:189] setting minikube options for container-runtime
	I0514 01:12:57.868830     744 config.go:182] Loaded profile config "calico-204600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 01:12:57.868830     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:13:00.208633     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:13:00.208681     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:00.208681     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:13:02.935580     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:13:02.935580     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:02.940289     744 main.go:141] libmachine: Using SSH client type: native
	I0514 01:13:02.940810     744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.106.124 22 <nil> <nil>}
	I0514 01:13:02.940912     744 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0514 01:13:03.085296     744 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0514 01:13:03.085296     744 buildroot.go:70] root file system type: tmpfs
	I0514 01:13:03.085296     744 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0514 01:13:03.085855     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:13:05.568098     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:13:05.568976     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:05.569060     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:13:08.375888     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:13:08.375888     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:08.380890     744 main.go:141] libmachine: Using SSH client type: native
	I0514 01:13:08.380890     744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.106.124 22 <nil> <nil>}
	I0514 01:13:08.380890     744 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0514 01:13:08.554746     744 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0514 01:13:08.554746     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:13:10.869421     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:13:10.869811     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:10.869811     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:13:13.555500     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:13:13.555584     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:13.560037     744 main.go:141] libmachine: Using SSH client type: native
	I0514 01:13:13.560037     744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.106.124 22 <nil> <nil>}
	I0514 01:13:13.560037     744 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0514 01:13:15.896407     744 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0514 01:13:15.896958     744 machine.go:97] duration metric: took 49.5719458s to provisionDockerMachine
	I0514 01:13:15.896958     744 client.go:171] duration metric: took 2m4.325871s to LocalClient.Create
	I0514 01:13:15.897069     744 start.go:167] duration metric: took 2m4.3259815s to libmachine.API.Create "calico-204600"
	I0514 01:13:15.897116     744 start.go:293] postStartSetup for "calico-204600" (driver="hyperv")
	I0514 01:13:15.897116     744 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0514 01:13:15.908389     744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0514 01:13:15.908389     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:13:18.226629     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:13:18.226629     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:18.226629     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:13:21.010984     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:13:21.010984     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:21.012175     744 sshutil.go:53] new ssh client: &{IP:172.23.106.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-204600\id_rsa Username:docker}
	I0514 01:13:21.135651     744 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2269127s)
	I0514 01:13:21.150338     744 ssh_runner.go:195] Run: cat /etc/os-release
	I0514 01:13:21.158848     744 info.go:137] Remote host: Buildroot 2023.02.9
	I0514 01:13:21.158880     744 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0514 01:13:21.159129     744 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0514 01:13:21.159741     744 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem -> 59842.pem in /etc/ssl/certs
	I0514 01:13:21.175827     744 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0514 01:13:21.204669     744 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\59842.pem --> /etc/ssl/certs/59842.pem (1708 bytes)
	I0514 01:13:21.273079     744 start.go:296] duration metric: took 5.3756039s for postStartSetup
	I0514 01:13:21.275635     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:13:23.643699     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:13:23.643699     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:23.644073     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:13:26.441178     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:13:26.441178     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:26.441439     744 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\calico-204600\config.json ...
	I0514 01:13:26.447420     744 start.go:128] duration metric: took 2m14.8791324s to createHost
	I0514 01:13:26.447420     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:13:28.708361     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:13:28.708361     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:28.709177     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:13:31.138318     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:13:31.138488     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:31.144486     744 main.go:141] libmachine: Using SSH client type: native
	I0514 01:13:31.144920     744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.106.124 22 <nil> <nil>}
	I0514 01:13:31.145017     744 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0514 01:13:31.272075     744 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715649211.513629082
	
	I0514 01:13:31.272198     744 fix.go:216] guest clock: 1715649211.513629082
	I0514 01:13:31.272198     744 fix.go:229] Guest: 2024-05-14 01:13:31.513629082 +0000 UTC Remote: 2024-05-14 01:13:26.44742 +0000 UTC m=+336.928096401 (delta=5.066209082s)
	I0514 01:13:31.272293     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:13:33.374327     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:13:33.374327     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:33.374727     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:13:35.889683     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:13:35.889683     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:35.893676     744 main.go:141] libmachine: Using SSH client type: native
	I0514 01:13:35.893676     744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcda3c0] 0xcdcfa0 <nil>  [] 0s} 172.23.106.124 22 <nil> <nil>}
	I0514 01:13:35.894277     744 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715649211
	I0514 01:13:36.041567     744 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 14 01:13:31 UTC 2024
	
	I0514 01:13:36.041567     744 fix.go:236] clock set: Tue May 14 01:13:31 UTC 2024
	 (err=<nil>)
	I0514 01:13:36.041567     744 start.go:83] releasing machines lock for "calico-204600", held for 2m24.4729404s
	I0514 01:13:36.041567     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:13:38.509741     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:13:38.509741     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:38.509741     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:13:41.277665     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:13:41.277665     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:41.281671     744 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0514 01:13:41.281671     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:13:41.293670     744 ssh_runner.go:195] Run: cat /version.json
	I0514 01:13:41.293670     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-204600 ).state
	I0514 01:13:43.823002     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:13:43.823067     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:43.823067     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:13:43.824681     744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 01:13:43.825397     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:43.825918     744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-204600 ).networkadapters[0]).ipaddresses[0]
	I0514 01:13:46.763328     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:13:46.763383     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:46.763383     744 sshutil.go:53] new ssh client: &{IP:172.23.106.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-204600\id_rsa Username:docker}
	I0514 01:13:46.794874     744 main.go:141] libmachine: [stdout =====>] : 172.23.106.124
	
	I0514 01:13:46.795056     744 main.go:141] libmachine: [stderr =====>] : 
	I0514 01:13:46.795267     744 sshutil.go:53] new ssh client: &{IP:172.23.106.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-204600\id_rsa Username:docker}
	I0514 01:13:46.870376     744 ssh_runner.go:235] Completed: cat /version.json: (5.5763335s)
	I0514 01:13:46.884667     744 ssh_runner.go:195] Run: systemctl --version
	I0514 01:13:46.948711     744 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.6666621s)
	I0514 01:13:46.963772     744 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0514 01:13:46.977928     744 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0514 01:13:46.991786     744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0514 01:13:47.029333     744 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0514 01:13:47.029522     744 start.go:494] detecting cgroup driver to use...
	I0514 01:13:47.029825     744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 01:13:47.090633     744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0514 01:13:47.134947     744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0514 01:13:47.162743     744 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0514 01:13:47.177759     744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0514 01:13:47.213410     744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 01:13:47.254932     744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0514 01:13:47.286399     744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0514 01:13:47.326314     744 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0514 01:13:47.364528     744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0514 01:13:47.394630     744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0514 01:13:47.434064     744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0514 01:13:47.475816     744 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0514 01:13:47.505814     744 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0514 01:13:47.542402     744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:13:47.769592     744 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0514 01:13:47.807245     744 start.go:494] detecting cgroup driver to use...
	I0514 01:13:47.822503     744 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0514 01:13:47.859693     744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 01:13:47.897136     744 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0514 01:13:47.935653     744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0514 01:13:47.974848     744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0514 01:13:48.008273     744 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0514 01:13:48.069987     744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0514 01:13:48.096198     744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0514 01:13:48.143925     744 ssh_runner.go:195] Run: which cri-dockerd
	I0514 01:13:48.164185     744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0514 01:13:48.183794     744 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0514 01:13:48.227428     744 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0514 01:13:48.467477     744 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0514 01:13:48.676091     744 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0514 01:13:48.676395     744 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0514 01:13:48.721562     744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:13:48.933504     744 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0514 01:13:51.546592     744 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6129142s)
	I0514 01:13:51.556598     744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0514 01:13:51.607842     744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 01:13:51.649254     744 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0514 01:13:51.878375     744 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0514 01:13:52.083605     744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:13:52.386406     744 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0514 01:13:52.438744     744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0514 01:13:52.486152     744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:13:52.711229     744 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0514 01:13:52.845753     744 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0514 01:13:52.859753     744 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0514 01:13:52.868750     744 start.go:562] Will wait 60s for crictl version
	I0514 01:13:52.879752     744 ssh_runner.go:195] Run: which crictl
	I0514 01:13:52.898796     744 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0514 01:13:52.953601     744 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0514 01:13:52.961567     744 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 01:13:53.006625     744 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0514 01:13:53.048188     744 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0514 01:13:53.049194     744 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0514 01:13:53.052190     744 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0514 01:13:53.052190     744 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0514 01:13:53.052190     744 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0514 01:13:53.052190     744 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:27:95:ed Flags:up|broadcast|multicast|running}
	I0514 01:13:53.055189     744 ip.go:210] interface addr: fe80::3ceb:68d:afab:af25/64
	I0514 01:13:53.055189     744 ip.go:210] interface addr: 172.23.96.1/20
	I0514 01:13:53.064190     744 ssh_runner.go:195] Run: grep 172.23.96.1	host.minikube.internal$ /etc/hosts
	I0514 01:13:53.072066     744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0514 01:13:53.095156     744 kubeadm.go:877] updating cluster {Name:calico-204600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:calico-204600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:172.23.106.124 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0514 01:13:53.095841     744 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0514 01:13:53.103458     744 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0514 01:13:53.124176     744 docker.go:685] Got preloaded images: 
	I0514 01:13:53.124176     744 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0514 01:13:53.137703     744 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0514 01:13:53.170651     744 ssh_runner.go:195] Run: which lz4
	I0514 01:13:53.192058     744 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0514 01:13:53.199131     744 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0514 01:13:53.199476     744 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0514 01:13:55.082146     744 docker.go:649] duration metric: took 1.9046677s to copy over tarball
	I0514 01:13:55.092190     744 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0514 01:14:03.125711     744 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.0328825s)
	I0514 01:14:03.125793     744 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0514 01:14:03.211625     744 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0514 01:14:03.239667     744 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0514 01:14:03.298061     744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0514 01:14:03.544402     744 ssh_runner.go:195] Run: sudo systemctl restart docker
	
	
	==> Docker <==
	May 14 01:11:55 pause-851700 dockerd[4981]: time="2024-05-14T01:11:55.486563377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 14 01:11:55 pause-851700 dockerd[4981]: time="2024-05-14T01:11:55.530253190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 14 01:11:55 pause-851700 dockerd[4981]: time="2024-05-14T01:11:55.530782122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 14 01:11:55 pause-851700 dockerd[4981]: time="2024-05-14T01:11:55.531175445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 14 01:11:55 pause-851700 dockerd[4981]: time="2024-05-14T01:11:55.532962552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 14 01:11:55 pause-851700 cri-dockerd[5201]: time="2024-05-14T01:11:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a8406ba2f2f82acab267a14fc1b7ac3ba3873ccaae88c257724b85d9e493c25e/resolv.conf as [nameserver 172.23.96.1]"
	May 14 01:11:55 pause-851700 cri-dockerd[5201]: time="2024-05-14T01:11:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8131d02210f06fa96210a39cecce86ace1b24a67c7d71638d01f441564d439e1/resolv.conf as [nameserver 172.23.96.1]"
	May 14 01:11:55 pause-851700 dockerd[4981]: time="2024-05-14T01:11:55.959613794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 14 01:11:55 pause-851700 dockerd[4981]: time="2024-05-14T01:11:55.960017518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 14 01:11:55 pause-851700 dockerd[4981]: time="2024-05-14T01:11:55.960152526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 14 01:11:55 pause-851700 dockerd[4981]: time="2024-05-14T01:11:55.960399541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 14 01:11:56 pause-851700 dockerd[4981]: time="2024-05-14T01:11:56.326529119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 14 01:11:56 pause-851700 dockerd[4981]: time="2024-05-14T01:11:56.326712131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 14 01:11:56 pause-851700 dockerd[4981]: time="2024-05-14T01:11:56.327061353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 14 01:11:56 pause-851700 dockerd[4981]: time="2024-05-14T01:11:56.327515882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 14 01:12:44 pause-851700 cri-dockerd[5201]: time="2024-05-14T01:12:44Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	May 14 01:12:44 pause-851700 cri-dockerd[5201]: time="2024-05-14T01:12:44Z" level=error msg="Failed to retrieve checkpoint for sandbox 0546d4d0592055cd55dd68fabffd6504ae4a879eb41b1c0170214f0d5fcdcddc: checkpoint is not found"
	May 14 01:13:26 pause-851700 dockerd[4975]: 2024/05/14 01:13:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 01:13:26 pause-851700 dockerd[4975]: 2024/05/14 01:13:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 01:13:36 pause-851700 dockerd[4975]: 2024/05/14 01:13:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 01:13:36 pause-851700 dockerd[4975]: 2024/05/14 01:13:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 01:13:37 pause-851700 dockerd[4975]: 2024/05/14 01:13:37 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 01:13:37 pause-851700 dockerd[4975]: 2024/05/14 01:13:37 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 01:13:37 pause-851700 dockerd[4975]: 2024/05/14 01:13:37 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 14 01:13:37 pause-851700 dockerd[4975]: 2024/05/14 01:13:37 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a1ecdc98e3b06       cbb01a7bd410d       2 minutes ago       Running             coredns                   1                   8131d02210f06       coredns-7db6d8ff4d-ntqd5
	8b6f668b98e5c       a0bf559e280cf       2 minutes ago       Running             kube-proxy                2                   a8406ba2f2f82       kube-proxy-8qgfs
	f0158cf67f9e9       259c8277fcbbc       2 minutes ago       Running             kube-scheduler            2                   221fc404646e9       kube-scheduler-pause-851700
	66e920ff9a6f6       3861cfcd7c04c       2 minutes ago       Running             etcd                      2                   d339a10b09a1d       etcd-pause-851700
	040c2ded4465d       c42f13656d0b2       2 minutes ago       Running             kube-apiserver            2                   ea5b119d99b57       kube-apiserver-pause-851700
	eda66ff4e85fd       c7aad43836fa5       2 minutes ago       Running             kube-controller-manager   2                   72215b2606f06       kube-controller-manager-pause-851700
	49157b1b723fe       a0bf559e280cf       2 minutes ago       Created             kube-proxy                1                   798a552412b89       kube-proxy-8qgfs
	62549574b37b7       259c8277fcbbc       2 minutes ago       Created             kube-scheduler            1                   18eaec56489e6       kube-scheduler-pause-851700
	6088c2f87d781       c42f13656d0b2       2 minutes ago       Created             kube-apiserver            1                   f132fb594539d       kube-apiserver-pause-851700
	3aa29f1051a64       3861cfcd7c04c       2 minutes ago       Exited              etcd                      1                   f388a99b7b433       etcd-pause-851700
	07a402b65f7be       c7aad43836fa5       2 minutes ago       Exited              kube-controller-manager   1                   5e24fe2e11bcd       kube-controller-manager-pause-851700
	42e4b7e0c0f98       cbb01a7bd410d       9 minutes ago       Exited              coredns                   0                   d83b1ad1e1b80       coredns-7db6d8ff4d-ntqd5
	
	
	==> coredns [42e4b7e0c0f9] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[865714426]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-May-2024 01:04:52.815) (total time: 30000ms):
	Trace[865714426]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (01:05:22.815)
	Trace[865714426]: [30.0007496s] [30.0007496s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2016310029]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-May-2024 01:04:52.813) (total time: 30003ms):
	Trace[2016310029]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (01:05:22.815)
	Trace[2016310029]: [30.003514414s] [30.003514414s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1842595072]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-May-2024 01:04:52.813) (total time: 30004ms):
	Trace[1842595072]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (01:05:22.814)
	Trace[1842595072]: [30.004683995s] [30.004683995s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = aa3c53a4fee7c79042020c4ad5abc53f615c90ace85c56ddcef4febd643c83c914a53a500e1bfe4eab6dd4f6a22b9d2014a8ba875b505ed10d3063ed95ac2ed3
	[INFO] Reloading complete
	[INFO] 127.0.0.1:59119 - 9417 "HINFO IN 2341313173456037861.7315749896242332163. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.04089532s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a1ecdc98e3b0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = aa3c53a4fee7c79042020c4ad5abc53f615c90ace85c56ddcef4febd643c83c914a53a500e1bfe4eab6dd4f6a22b9d2014a8ba875b505ed10d3063ed95ac2ed3
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41921 - 57314 "HINFO IN 2819303177314173937.66606858249195375. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.049676848s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	
	
	==> dmesg <==
	[  +0.217619] kauditd_printk_skb: 12 callbacks suppressed
	[May14 01:05] kauditd_printk_skb: 88 callbacks suppressed
	[May14 01:08] hrtimer: interrupt took 4429302 ns
	[May14 01:11] systemd-fstab-generator[4538]: Ignoring "noauto" option for root device
	[  +0.631914] systemd-fstab-generator[4573]: Ignoring "noauto" option for root device
	[  +0.304771] systemd-fstab-generator[4586]: Ignoring "noauto" option for root device
	[  +0.330597] systemd-fstab-generator[4599]: Ignoring "noauto" option for root device
	[  +5.347338] kauditd_printk_skb: 87 callbacks suppressed
	[  +8.109134] systemd-fstab-generator[5150]: Ignoring "noauto" option for root device
	[  +0.235325] systemd-fstab-generator[5161]: Ignoring "noauto" option for root device
	[  +0.227992] systemd-fstab-generator[5173]: Ignoring "noauto" option for root device
	[  +0.316600] systemd-fstab-generator[5188]: Ignoring "noauto" option for root device
	[  +0.967342] systemd-fstab-generator[5345]: Ignoring "noauto" option for root device
	[  +0.368495] kauditd_printk_skb: 140 callbacks suppressed
	[  +4.423256] systemd-fstab-generator[6188]: Ignoring "noauto" option for root device
	[  +1.337849] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.832931] kauditd_printk_skb: 30 callbacks suppressed
	[May14 01:12] kauditd_printk_skb: 19 callbacks suppressed
	[  +3.583456] systemd-fstab-generator[7151]: Ignoring "noauto" option for root device
	[ +12.246180] systemd-fstab-generator[7231]: Ignoring "noauto" option for root device
	[  +0.162952] kauditd_printk_skb: 14 callbacks suppressed
	[ +21.400915] systemd-fstab-generator[7513]: Ignoring "noauto" option for root device
	[  +0.180512] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.264812] systemd-fstab-generator[7610]: Ignoring "noauto" option for root device
	[  +0.168635] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [3aa29f1051a6] <==
	{"level":"info","ts":"2024-05-14T01:11:44.850126Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"26.367094ms"}
	{"level":"info","ts":"2024-05-14T01:11:44.885272Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-05-14T01:11:44.931913Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"5c0ab8bdc2f3a27e","local-member-id":"2c597cdbed357cb1","commit-index":640}
	{"level":"info","ts":"2024-05-14T01:11:44.932095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2c597cdbed357cb1 switched to configuration voters=()"}
	{"level":"info","ts":"2024-05-14T01:11:44.932122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2c597cdbed357cb1 became follower at term 2"}
	{"level":"info","ts":"2024-05-14T01:11:44.932135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 2c597cdbed357cb1 [peers: [], term: 2, commit: 640, applied: 0, lastindex: 640, lastterm: 2]"}
	{"level":"warn","ts":"2024-05-14T01:11:44.942697Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-05-14T01:11:44.971466Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":550}
	{"level":"info","ts":"2024-05-14T01:11:44.98838Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-05-14T01:11:45.002895Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"2c597cdbed357cb1","timeout":"7s"}
	{"level":"info","ts":"2024-05-14T01:11:45.003313Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"2c597cdbed357cb1"}
	{"level":"info","ts":"2024-05-14T01:11:45.003379Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"2c597cdbed357cb1","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-05-14T01:11:45.005761Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-05-14T01:11:45.0059Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-14T01:11:45.005931Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-14T01:11:45.00594Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-14T01:11:45.006282Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2c597cdbed357cb1 switched to configuration voters=(3195722694615465137)"}
	{"level":"info","ts":"2024-05-14T01:11:45.006333Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5c0ab8bdc2f3a27e","local-member-id":"2c597cdbed357cb1","added-peer-id":"2c597cdbed357cb1","added-peer-peer-urls":["https://172.23.111.154:2380"]}
	{"level":"info","ts":"2024-05-14T01:11:45.006427Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5c0ab8bdc2f3a27e","local-member-id":"2c597cdbed357cb1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-14T01:11:45.006453Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-14T01:11:45.019921Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-14T01:11:45.020298Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2c597cdbed357cb1","initial-advertise-peer-urls":["https://172.23.111.154:2380"],"listen-peer-urls":["https://172.23.111.154:2380"],"advertise-client-urls":["https://172.23.111.154:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.111.154:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-14T01:11:45.020375Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-14T01:11:45.020489Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.23.111.154:2380"}
	{"level":"info","ts":"2024-05-14T01:11:45.020508Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.23.111.154:2380"}
	
	
	==> etcd [66e920ff9a6f] <==
	{"level":"info","ts":"2024-05-14T01:11:51.98058Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-14T01:11:51.982918Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.23.111.154:2379"}
	{"level":"info","ts":"2024-05-14T01:11:51.986866Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-14T01:11:51.987059Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-14T01:11:54.22352Z","caller":"traceutil/trace.go:171","msg":"trace[228720461] linearizableReadLoop","detail":"{readStateIndex:643; appliedIndex:642; }","duration":"117.599852ms","start":"2024-05-14T01:11:54.105903Z","end":"2024-05-14T01:11:54.223503Z","steps":["trace[228720461] 'read index received'  (duration: 117.418841ms)","trace[228720461] 'applied index is now lower than readState.Index'  (duration: 180.511µs)"],"step_count":2}
	{"level":"info","ts":"2024-05-14T01:11:54.224037Z","caller":"traceutil/trace.go:171","msg":"trace[959813969] transaction","detail":"{read_only:false; response_revision:551; number_of_response:1; }","duration":"123.14508ms","start":"2024-05-14T01:11:54.100878Z","end":"2024-05-14T01:11:54.224024Z","steps":["trace[959813969] 'process raft request'  (duration: 122.486741ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-14T01:11:54.224459Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.533108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:118"}
	{"level":"info","ts":"2024-05-14T01:11:54.22457Z","caller":"traceutil/trace.go:171","msg":"trace[1864620139] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:551; }","duration":"118.677216ms","start":"2024-05-14T01:11:54.105883Z","end":"2024-05-14T01:11:54.22456Z","steps":["trace[1864620139] 'agreement among raft nodes before linearized reading'  (duration: 118.522707ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-14T01:11:54.234265Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.410578ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" ","response":"range_response_count:2 size:1908"}
	{"level":"info","ts":"2024-05-14T01:11:54.235066Z","caller":"traceutil/trace.go:171","msg":"trace[969980280] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:2; response_revision:552; }","duration":"122.243727ms","start":"2024-05-14T01:11:54.112811Z","end":"2024-05-14T01:11:54.235055Z","steps":["trace[969980280] 'agreement among raft nodes before linearized reading'  (duration: 121.295771ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-14T01:11:54.234714Z","caller":"traceutil/trace.go:171","msg":"trace[1617278336] transaction","detail":"{read_only:false; response_revision:552; number_of_response:1; }","duration":"128.73031ms","start":"2024-05-14T01:11:54.10597Z","end":"2024-05-14T01:11:54.234701Z","steps":["trace[1617278336] 'process raft request'  (duration: 127.961965ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-14T01:11:54.23475Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.843039ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/172.23.111.154\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-05-14T01:11:54.235249Z","caller":"traceutil/trace.go:171","msg":"trace[965550647] range","detail":"{range_begin:/registry/masterleases/172.23.111.154; range_end:; response_count:1; response_revision:552; }","duration":"104.37267ms","start":"2024-05-14T01:11:54.130867Z","end":"2024-05-14T01:11:54.23524Z","steps":["trace[965550647] 'agreement among raft nodes before linearized reading'  (duration: 103.841439ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-14T01:11:57.811419Z","caller":"traceutil/trace.go:171","msg":"trace[1487128947] linearizableReadLoop","detail":"{readStateIndex:675; appliedIndex:674; }","duration":"228.045578ms","start":"2024-05-14T01:11:57.583356Z","end":"2024-05-14T01:11:57.811401Z","steps":["trace[1487128947] 'read index received'  (duration: 139.814777ms)","trace[1487128947] 'applied index is now lower than readState.Index'  (duration: 88.229901ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-14T01:11:57.811725Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"228.354198ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-ntqd5\" ","response":"range_response_count:1 size:4966"}
	{"level":"info","ts":"2024-05-14T01:11:57.811455Z","caller":"traceutil/trace.go:171","msg":"trace[1399960450] transaction","detail":"{read_only:false; response_revision:566; number_of_response:1; }","duration":"347.086635ms","start":"2024-05-14T01:11:57.464346Z","end":"2024-05-14T01:11:57.811432Z","steps":["trace[1399960450] 'process raft request'  (duration: 258.883936ms)","trace[1399960450] 'compare'  (duration: 87.685567ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-14T01:11:57.816274Z","caller":"traceutil/trace.go:171","msg":"trace[1194517682] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-ntqd5; range_end:; response_count:1; response_revision:566; }","duration":"232.034631ms","start":"2024-05-14T01:11:57.583327Z","end":"2024-05-14T01:11:57.815362Z","steps":["trace[1194517682] 'agreement among raft nodes before linearized reading'  (duration: 228.251291ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-14T01:11:57.818468Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-14T01:11:57.464329Z","time spent":"351.626024ms","remote":"127.0.0.1:49744","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":616,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-851700.17cf35c60b33d78b\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-851700.17cf35c60b33d78b\" value_size:544 lease:8985120462776342394 >> failure:<>"}
	{"level":"warn","ts":"2024-05-14T01:11:58.259961Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"244.786469ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8985120462776342400 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-851700.17cf35c6118f5b07\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-851700.17cf35c6118f5b07\" value_size:598 lease:8985120462776342394 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-05-14T01:11:58.26101Z","caller":"traceutil/trace.go:171","msg":"trace[1232228386] transaction","detail":"{read_only:false; response_revision:567; number_of_response:1; }","duration":"438.154247ms","start":"2024-05-14T01:11:57.822835Z","end":"2024-05-14T01:11:58.260989Z","steps":["trace[1232228386] 'process raft request'  (duration: 192.275909ms)","trace[1232228386] 'compare'  (duration: 244.392943ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-14T01:11:58.261844Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-14T01:11:57.822822Z","time spent":"438.946797ms","remote":"127.0.0.1:49744","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":670,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-851700.17cf35c6118f5b07\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-851700.17cf35c6118f5b07\" value_size:598 lease:8985120462776342394 >> failure:<>"}
	{"level":"info","ts":"2024-05-14T01:11:58.264035Z","caller":"traceutil/trace.go:171","msg":"trace[2003042396] transaction","detail":"{read_only:false; response_revision:568; number_of_response:1; }","duration":"438.844291ms","start":"2024-05-14T01:11:57.825178Z","end":"2024-05-14T01:11:58.264022Z","steps":["trace[2003042396] 'process raft request'  (duration: 434.877239ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-14T01:11:58.264617Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-14T01:11:57.825163Z","time spent":"439.097507ms","remote":"127.0.0.1:49850","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5066,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-ntqd5\" mod_revision:557 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-ntqd5\" value_size:5007 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-ntqd5\" > >"}
	{"level":"info","ts":"2024-05-14T01:11:58.719099Z","caller":"traceutil/trace.go:171","msg":"trace[1912588650] transaction","detail":"{read_only:false; response_revision:570; number_of_response:1; }","duration":"356.724687ms","start":"2024-05-14T01:11:58.362356Z","end":"2024-05-14T01:11:58.719081Z","steps":["trace[1912588650] 'process raft request'  (duration: 356.320762ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-14T01:11:58.719483Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-14T01:11:58.362339Z","time spent":"357.058608ms","remote":"127.0.0.1:49744","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":664,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-851700.17cf35c6118f80ec\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-851700.17cf35c6118f80ec\" value_size:592 lease:8985120462776342394 >> failure:<>"}
	
	
	==> kernel <==
	 01:14:23 up 11 min,  0 users,  load average: 0.08, 0.31, 0.24
	Linux pause-851700 5.10.207 #1 SMP Thu May 9 02:07:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [040c2ded4465] <==
	I0514 01:11:54.074702       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0514 01:11:54.074741       1 policy_source.go:224] refreshing policies
	I0514 01:11:54.096975       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0514 01:11:54.124370       1 shared_informer.go:320] Caches are synced for configmaps
	I0514 01:11:54.126150       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0514 01:11:54.126569       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0514 01:11:54.127363       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0514 01:11:54.128210       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0514 01:11:54.134062       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0514 01:11:54.135818       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0514 01:11:54.142671       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0514 01:11:54.142951       1 aggregator.go:165] initial CRD sync complete...
	I0514 01:11:54.143242       1 autoregister_controller.go:141] Starting autoregister controller
	I0514 01:11:54.143266       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0514 01:11:54.143430       1 cache.go:39] Caches are synced for autoregister controller
	I0514 01:11:54.168057       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0514 01:11:54.258415       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0514 01:11:54.975202       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0514 01:11:56.467054       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0514 01:11:56.522045       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0514 01:11:56.633781       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0514 01:11:56.714218       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0514 01:11:56.727779       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0514 01:12:06.856401       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0514 01:12:06.939490       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [6088c2f87d78] <==
	
	
	==> kube-controller-manager [07a402b65f7b] <==
	
	
	==> kube-controller-manager [eda66ff4e85f] <==
	I0514 01:12:06.906288       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0514 01:12:06.911190       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0514 01:12:06.915539       1 shared_informer.go:320] Caches are synced for namespace
	I0514 01:12:06.918544       1 shared_informer.go:320] Caches are synced for attach detach
	I0514 01:12:06.921611       1 shared_informer.go:320] Caches are synced for node
	I0514 01:12:06.921840       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0514 01:12:06.922060       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0514 01:12:06.922270       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0514 01:12:06.922419       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0514 01:12:06.927178       1 shared_informer.go:320] Caches are synced for TTL
	I0514 01:12:06.927263       1 shared_informer.go:320] Caches are synced for endpoint
	I0514 01:12:06.927275       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0514 01:12:06.930206       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0514 01:12:06.942424       1 shared_informer.go:320] Caches are synced for crt configmap
	I0514 01:12:07.035703       1 shared_informer.go:320] Caches are synced for disruption
	I0514 01:12:07.059785       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0514 01:12:07.060894       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 01:12:07.105711       1 shared_informer.go:320] Caches are synced for resource quota
	I0514 01:12:07.141262       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0514 01:12:07.156749       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0514 01:12:07.157006       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0514 01:12:07.157930       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0514 01:12:07.540788       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 01:12:07.569005       1 shared_informer.go:320] Caches are synced for garbage collector
	I0514 01:12:07.569087       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [49157b1b723f] <==
	
	
	==> kube-proxy [8b6f668b98e5] <==
	I0514 01:11:56.348108       1 server_linux.go:69] "Using iptables proxy"
	I0514 01:11:56.376262       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.111.154"]
	I0514 01:11:56.446080       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0514 01:11:56.446125       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0514 01:11:56.446145       1 server_linux.go:165] "Using iptables Proxier"
	I0514 01:11:56.452533       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0514 01:11:56.453003       1 server.go:872] "Version info" version="v1.30.0"
	I0514 01:11:56.453315       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 01:11:56.455072       1 config.go:192] "Starting service config controller"
	I0514 01:11:56.455233       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0514 01:11:56.455390       1 config.go:101] "Starting endpoint slice config controller"
	I0514 01:11:56.456877       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0514 01:11:56.456349       1 config.go:319] "Starting node config controller"
	I0514 01:11:56.462422       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0514 01:11:56.556800       1 shared_informer.go:320] Caches are synced for service config
	I0514 01:11:56.563145       1 shared_informer.go:320] Caches are synced for node config
	I0514 01:11:56.563246       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [62549574b37b] <==
	
	
	==> kube-scheduler [f0158cf67f9e] <==
	I0514 01:11:51.916051       1 serving.go:380] Generated self-signed cert in-memory
	W0514 01:11:54.027065       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0514 01:11:54.027472       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0514 01:11:54.027697       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0514 01:11:54.027819       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0514 01:11:54.108344       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0514 01:11:54.108660       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 01:11:54.114506       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0514 01:11:54.114549       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0514 01:11:54.118103       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0514 01:11:54.118162       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0514 01:11:54.216025       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.150526    7520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/feb1bef467bbb676eeced6d36d10658b-etcd-data\") pod \"etcd-pause-851700\" (UID: \"feb1bef467bbb676eeced6d36d10658b\") " pod="kube-system/etcd-pause-851700"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.150638    7520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee963cc10a9538d2029afc43406ef6e0-k8s-certs\") pod \"kube-apiserver-pause-851700\" (UID: \"ee963cc10a9538d2029afc43406ef6e0\") " pod="kube-system/kube-apiserver-pause-851700"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.150806    7520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee963cc10a9538d2029afc43406ef6e0-usr-share-ca-certificates\") pod \"kube-apiserver-pause-851700\" (UID: \"ee963cc10a9538d2029afc43406ef6e0\") " pod="kube-system/kube-apiserver-pause-851700"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.151043    7520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c3eeefc2a42d7fe4e61d2d6a0aba0d1e-kubeconfig\") pod \"kube-controller-manager-pause-851700\" (UID: \"c3eeefc2a42d7fe4e61d2d6a0aba0d1e\") " pod="kube-system/kube-controller-manager-pause-851700"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.181071    7520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e24fe2e11bcd28f48b8ba98586ba8383b12ab8d148660f799c2a70771b0fa9d"
	May 14 01:12:45 pause-851700 kubelet[7520]: E0514 01:12:45.201274    7520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"etcd-pause-851700\" already exists" pod="kube-system/etcd-pause-851700"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.209066    7520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d83b1ad1e1b80c0a2d70e0f025e883f7a1bb91f92ff2ea3f0bff8e555ca9ef90"
	May 14 01:12:45 pause-851700 kubelet[7520]: E0514 01:12:45.223771    7520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-pause-851700\" already exists" pod="kube-system/kube-controller-manager-pause-851700"
	May 14 01:12:45 pause-851700 kubelet[7520]: E0514 01:12:45.223826    7520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-851700\" already exists" pod="kube-system/kube-apiserver-pause-851700"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.228658    7520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18eaec56489e6b1d47561c044caa199305444897a9985c4fd6d20b9608c84c4d"
	May 14 01:12:45 pause-851700 kubelet[7520]: E0514 01:12:45.245519    7520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-pause-851700\" already exists" pod="kube-system/kube-scheduler-pause-851700"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.255326    7520 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="798a552412b896cd3e2b5c1492dfab1a53a4e23bf19b970c9934a9a962bc6dea"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.742920    7520 apiserver.go:52] "Watching apiserver"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.749089    7520 topology_manager.go:215] "Topology Admit Handler" podUID="10fdf7e7-0874-4abd-911e-88f6950f220a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ntqd5"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.749539    7520 topology_manager.go:215] "Topology Admit Handler" podUID="0214f901-7bdf-4eab-81a1-5f041f2be6c5" podNamespace="kube-system" podName="kube-proxy-8qgfs"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.761170    7520 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.856728    7520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0214f901-7bdf-4eab-81a1-5f041f2be6c5-xtables-lock\") pod \"kube-proxy-8qgfs\" (UID: \"0214f901-7bdf-4eab-81a1-5f041f2be6c5\") " pod="kube-system/kube-proxy-8qgfs"
	May 14 01:12:45 pause-851700 kubelet[7520]: I0514 01:12:45.856825    7520 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0214f901-7bdf-4eab-81a1-5f041f2be6c5-lib-modules\") pod \"kube-proxy-8qgfs\" (UID: \"0214f901-7bdf-4eab-81a1-5f041f2be6c5\") " pod="kube-system/kube-proxy-8qgfs"
	May 14 01:12:46 pause-851700 kubelet[7520]: E0514 01:12:46.301112    7520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-851700\" already exists" pod="kube-system/kube-apiserver-pause-851700"
	May 14 01:12:46 pause-851700 kubelet[7520]: E0514 01:12:46.302072    7520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-pause-851700\" already exists" pod="kube-system/kube-controller-manager-pause-851700"
	May 14 01:12:46 pause-851700 kubelet[7520]: E0514 01:12:46.303109    7520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"etcd-pause-851700\" already exists" pod="kube-system/etcd-pause-851700"
	May 14 01:12:46 pause-851700 kubelet[7520]: E0514 01:12:46.303663    7520 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-pause-851700\" already exists" pod="kube-system/kube-scheduler-pause-851700"
	May 14 01:12:52 pause-851700 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	May 14 01:12:52 pause-851700 systemd[1]: kubelet.service: Deactivated successfully.
	May 14 01:12:52 pause-851700 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	

-- /stdout --
** stderr ** 
	W0514 01:14:04.744619   11092 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-851700 -n pause-851700
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-851700 -n pause-851700: exit status 2 (12.9847063s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0514 01:14:24.954496   10168 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-851700" apiserver is not running, skipping kubectl commands (state="Paused")
--- FAIL: TestPause/serial/DeletePaused (104.82s)

TestNetworkPlugins/group/false/NetCatPod (10800.514s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-204600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-pvl74" [940bd40d-3818-4f7d-99f6-fb686dea7fbf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
panic: test timed out after 3h0m0s
running tests:
	TestNetworkPlugins (38m49s)
	TestNetworkPlugins/group/calico (13m57s)
	TestNetworkPlugins/group/custom-flannel (6m19s)
	TestNetworkPlugins/group/enable-default-cni (3m6s)
	TestNetworkPlugins/group/enable-default-cni/Start (3m6s)
	TestNetworkPlugins/group/false (5m41s)
	TestNetworkPlugins/group/false/NetCatPod (1s)
	TestStartStop (23m54s)

goroutine 2905 [running]:
testing.(*M).startAlarm.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 14 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00099e9c0, 0xc001159bb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000008870, {0x4654a40, 0x2a, 0x2a}, {0x231d8a3?, 0x16806f?, 0x4677cc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0008108c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0008108c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 12 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00052a200)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 2764 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc000757080, 0xc0014d19e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2761
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 2447 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0009b7400, 0xc0000543c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2445
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 2055 [chan receive, 39 minutes]:
testing.(*T).Run(0xc0009c8000, {0x22c1d11?, 0x11f48d?}, 0xc0018a4078)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0009c8000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0009c8000, 0x2d33c58)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 16 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 26
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 981 [chan send, 148 minutes]:
os/exec.(*Cmd).watchCtx(0xc001c3c420, 0xc0019e35c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 802
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 2873 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00064ba90, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1db8fa0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0006fa0c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00064bac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001dae4b0, {0x3289640, 0xc001638870}, 0x1, 0xc0000543c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001dae4b0, 0x3b9aca00, 0x0, 0x1, 0xc0000543c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2849
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2623 [chan receive, 6 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00123e9c0, 0xc0000543c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2621
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 2622 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001267f20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2621
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2744 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0006fade0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2740
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 137 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0006fa4e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 147
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 138 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008d6b00, 0xc0000543c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 147
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 151 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0008d6a50, 0x3c)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1db8fa0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0006fa3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008d6b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000066240, {0x3289640, 0xc0008c8090}, 0x1, 0xc0000543c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000066240, 0x3b9aca00, 0x0, 0x1, 0xc0000543c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 138
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 152 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x32ad080, 0xc0000543c0}, 0xc00130df50, 0xc00130df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x32ad080, 0xc0000543c0}, 0x0?, 0xc00130df50, 0xc00130df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x32ad080?, 0xc0000543c0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 138
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 153 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 152
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2874 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x32ad080, 0xc0000543c0}, 0xc001313f50, 0xc001313f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x32ad080, 0xc0000543c0}, 0xb0?, 0xc001313f50, 0xc001313f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x32ad080?, 0xc0000543c0?}, 0xc001313fb0?, 0x646528?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x6464db?, 0xc0012dcd80?, 0xc000640720?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2849
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 776 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001364180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 763
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2106 [chan receive, 6 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0008ef860, 0xc0018a4078)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2055
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2110 [chan receive, 4 minutes]:
testing.(*T).Run(0xc0009c8340, {0x22c1d16?, 0x32821d8?}, 0xc001208b10)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0009c8340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc0009c8340, 0xc000070380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2106
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2848 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0006fa300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2872
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2745 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0013ec840, 0xc0000543c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2740
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 2108 [chan receive, 39 minutes]:
testing.(*testContext).waitParallel(0xc0004a1450)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008efba0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008efba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0008efba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0008efba0, 0xc000070280)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2106
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2332 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008d7700, 0xc0000543c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2352
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 2761 [syscall, 4 minutes, locked to thread]:
syscall.SyscallN(0x7ffa3cb54de0?, {0xc00125bbd0?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x4c4, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc000a1ec90)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000757080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000757080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc00099f6c0, 0xc000757080)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc00099f6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc00099f6c0, 0xc001208b10)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2110
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 814 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 813
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2602 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x32ad080, 0xc0000543c0}, 0xc000739f50, 0xc000739f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x32ad080, 0xc0000543c0}, 0x90?, 0xc000739f50, 0xc000739f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x32ad080?, 0xc0000543c0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000739fd0?, 0x23e404?, 0xc000739fa8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2623
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 2411 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0009b73d0, 0xd)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1db8fa0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001417260)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0009b7400)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0004fafd0, {0x3289640, 0xc001449290}, 0x1, 0xc0000543c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0004fafd0, 0x3b9aca00, 0x0, 0x1, 0xc0000543c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2447
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2769 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0013ec810, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1db8fa0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0006facc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0013ec840)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001a96440, {0x3289640, 0xc001638180}, 0x1, 0xc0000543c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001a96440, 0x3b9aca00, 0x0, 0x1, 0xc0000543c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2745
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2147 [syscall, locked to thread]:
syscall.SyscallN(0x7ffa3cb54de0?, {0xc0012ad108?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x0?, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x7fc, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc000a1e600)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001220420)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc001220420)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
os/exec.(*Cmd).CombinedOutput(0xc001220420)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:1012 +0x85
k8s.io/minikube/test/integration.debugLogs(0xc0009c9380, {0xc000495860, 0xd})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:650 +0xb9e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0009c9380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:211 +0xbcc
testing.tRunner(0xc0009c9380, 0xc000071100)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2106
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 677 [IO wait, 164 minutes]:
internal/poll.runtime_pollWait(0x181e971c220, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xbfdd6?, 0x4705120?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc0008c71a0, 0xc001b17bb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc0008c7188, 0x2e0, {0xc00081c000?, 0x0?, 0x0?}, 0xc000620008?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc0008c7188, 0xc001b17d90)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc0008c7188)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc00062cfe0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc00062cfe0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0004720f0, {0x32a0120, 0xc00062cfe0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0004720f0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0004f7520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 630
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 2295 [chan receive, 25 minutes]:
testing.(*testContext).waitParallel(0xc0004a1450)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001996680)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001996680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001996680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001996680, 0xc000211cc0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2289
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2413 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2412
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2786 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x32ad080, 0xc0000543c0}, 0xc001d8bf50, 0xc001d8bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x32ad080, 0xc0000543c0}, 0xa0?, 0xc001d8bf50, 0xc001d8bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x32ad080?, 0xc0000543c0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001d8bfd0?, 0x23e404?, 0xc0015bd4f0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2745
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 813 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x32ad080, 0xc0000543c0}, 0xc001229f50, 0xc001229f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x32ad080, 0xc0000543c0}, 0x90?, 0xc001229f50, 0xc001229f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x32ad080?, 0xc0000543c0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001229fd0?, 0x23e404?, 0xc0018f80c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 777
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 812 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001692b50, 0x36)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1db8fa0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001364060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001692b80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000111810, {0x3289640, 0xc000a25b90}, 0x1, 0xc0000543c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000111810, 0x3b9aca00, 0x0, 0x1, 0xc0000543c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 777
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2109 [chan receive, 39 minutes]:
testing.(*testContext).waitParallel(0xc0004a1450)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008efd40)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008efd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0008efd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0008efd40, 0xc000070300)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2106
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2331 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00193b1a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2352
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2875 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2874
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 777 [chan receive, 154 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001692b80, 0xc0000543c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 763
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 2087 [chan receive, 25 minutes]:
testing.(*T).Run(0xc0009c89c0, {0x22c1d11?, 0x1f7333?}, 0x2d33e78)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0009c89c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0009c89c0, 0x2d33ca0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2601 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00123e990, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1db8fa0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001267e00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00123e9c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001462060, {0x3289640, 0xc001152450}, 0x1, 0xc0000543c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001462060, 0x3b9aca00, 0x0, 0x1, 0xc0000543c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2623
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2446 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001417380)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2445
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2762 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x240?, {0xc001b15b20?, 0xc7ea5?, 0x4705120?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x4625567?, 0xc001b15b80?, 0xbfdd6?, 0x4705120?, 0xc001b15c08?, 0xb281b?, 0xa8ba6?, 0x67?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x5b0, {0xc001862a4f?, 0x5b1, 0x16417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc001336288?, {0xc001862a4f?, 0xec1be?, 0x800?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001336288, {0xc001862a4f, 0x5b1, 0x5b1})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000448a78, {0xc001862a4f?, 0xc001b15d98?, 0x210?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001208c00, {0x3288200, 0xc0006289a8})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3288340, 0xc001208c00}, {0x3288200, 0xc0006289a8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3288340, 0xc001208c00})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4608b60?, {0x3288340?, 0xc001208c00?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3288340, 0xc001208c00}, {0x32882c0, 0xc000448a78}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc001daeba0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2761
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2318 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0008d76d0, 0x10)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1db8fa0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00193b080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008d7700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001a96000, {0x3289640, 0xc001448cc0}, 0x1, 0xc0000543c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001a96000, 0x3b9aca00, 0x0, 0x1, 0xc0000543c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2332
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2146 [syscall, locked to thread]:
syscall.SyscallN(0x7ffa3cb54de0?, {0xc001593108?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x0?, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x520, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc001a89fb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001220160)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc001220160)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
os/exec.(*Cmd).CombinedOutput(0xc001220160)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:1012 +0x85
k8s.io/minikube/test/integration.debugLogs(0xc0009c91e0, {0xc001702048, 0x15})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:447 +0x4f75
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0009c91e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:211 +0xbcc
testing.tRunner(0xc0009c91e0, 0xc000070f80)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2106
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2111 [chan receive, 39 minutes]:
testing.(*testContext).waitParallel(0xc0004a1450)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0009c8680)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0009c8680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0009c8680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0009c8680, 0xc000070400)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2106
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1205 [chan send, 150 minutes]:
os/exec.(*Cmd).watchCtx(0xc0018c8420, 0xc0018f90e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1204
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 2412 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x32ad080, 0xc0000543c0}, 0xc0016f7f50, 0xc0016f7f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x32ad080, 0xc0000543c0}, 0xa0?, 0xc0016f7f50, 0xc0016f7f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x32ad080?, 0xc0000543c0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0016f7fd0?, 0x23e404?, 0xc0015581e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2447
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 2763 [syscall, locked to thread]:
syscall.SyscallN(0x181e97208b8?, {0xc0012d5b20?, 0xc7ea5?, 0x4?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x181e97208b8?, 0xc0012d5b80?, 0xbfdd6?, 0x4705120?, 0xc0012d5c08?, 0xb2985?, 0x181e3dc0eb8?, 0x8000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x5b4, {0xc0015fca51?, 0x15af, 0x16417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc001336c88?, {0xc0015fca51?, 0xec1be?, 0x8000?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001336c88, {0xc0015fca51, 0x15af, 0x15af})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000448af0, {0xc0015fca51?, 0xc0012d5d98?, 0x3e23?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001208c30, {0x3288200, 0xc000448b30})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3288340, 0xc001208c30}, {0x3288200, 0xc000448b30}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3288340, 0xc001208c30})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4608b60?, {0x3288340?, 0xc001208c30?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3288340, 0xc001208c30}, {0x32882c0, 0xc000448af0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x2d33b70?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2761
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2289 [chan receive, 25 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0009c9a00, 0x2d33e78)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2087
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2113 [chan receive]:
testing.(*T).Run(0xc0009c8b60, {0x22ca83a?, 0x32821d8?}, 0xc0019a2420)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0009c8b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:148 +0x88b
testing.tRunner(0xc0009c8b60, 0xc000070500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2106
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2293 [chan receive, 25 minutes]:
testing.(*testContext).waitParallel(0xc0004a1450)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001996340)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001996340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001996340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001996340, 0xc000211c00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2289
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2872 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x32acec0, 0xc00093c9a0}, {0x32a0750, 0xc000127bc0}, 0x1, 0x0, 0xc0014d9be0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x32acec0?, 0xc000780460?}, 0x3b9aca00, 0xc0014d9dd8?, 0x1, 0xc0014d9be0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x32acec0, 0xc000780460}, 0xc001996ea0, {0xc000495810, 0xc}, {0x22c5b3b, 0x7}, {0x22cc7d9, 0xa}, 0xd18c2e2800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.4(0xc001996ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:163 +0x3c5
testing.tRunner(0xc001996ea0, 0xc0019a2420)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2113
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2320 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2319
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2290 [chan receive, 25 minutes]:
testing.(*testContext).waitParallel(0xc0004a1450)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0009c9ba0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0009c9ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0009c9ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0009c9ba0, 0xc000211780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2289
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2815 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc001915b20?, 0xc7ea5?, 0x4705120?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xb2c4d?, 0xc001915b80?, 0xbfdd6?, 0x4705120?, 0xc001915c08?, 0xb281b?, 0xa8ba6?, 0xc001cb4041?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x7d0, {0xc001248d3a?, 0x2c6, 0xc001248c00?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc0008c6f08?, {0xc001248d3a?, 0xec171?, 0x400?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0008c6f08, {0xc001248d3a, 0x2c6, 0x2c6})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000926128, {0xc001248d3a?, 0x181e9412768?, 0x13a?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00183fd70, {0x3288200, 0xc0009a63f8})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3288340, 0xc00183fd70}, {0x3288200, 0xc0009a63f8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc001915e78?, {0x3288340, 0xc00183fd70})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4608b60?, {0x3288340?, 0xc00183fd70?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3288340, 0xc00183fd70}, {0x32882c0, 0xc000926128}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0015f4780?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2146
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2294 [chan receive, 25 minutes]:
testing.(*testContext).waitParallel(0xc0004a1450)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0019964e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0019964e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0019964e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0019964e0, 0xc000211c40)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2289
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2603 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2602
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2902 [IO wait]:
internal/poll.runtime_pollWait(0x181e971c128, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xa427525c31d644c9?, 0x1bd8e8d6dc0057be?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc00066cf20, 0x2d34850)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).Read(0xc00066cf08, {0xc00141c000, 0x2000, 0x2000})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:436 +0x2b1
net.(*netFD).Read(0xc00066cf08, {0xc00141c000?, 0xc0001583c0?, 0x2?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0009a6528, {0xc00141c000?, 0xc00141d166?, 0x1a?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/net.go:179 +0x45
crypto/tls.(*atLeastReader).Read(0xc0015e0ed0, {0xc00141c000?, 0x0?, 0xc0015e0ed0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc0016cdb30, {0x3289d80, 0xc0015e0ed0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0016cd888, {0x181e971c470, 0xc000767740}, 0xc0007e3980?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0016cd888, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc0016cd888, {0xc0013c0000, 0x1000, 0xc000585dc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc0008f1c20, {0xc0007d6f20, 0x9, 0x4604130?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x32883e0, 0xc0008f1c20}, {0xc0007d6f20, 0x9, 0x9}, 0x9)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:335 +0x90
io.ReadFull(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0007d6f20, 0x9, 0xc0007e3dc0?}, {0x32883e0?, 0xc0008f1c20?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0007d6ee0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0007e3fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/transport.go:2442 +0xd8
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0001ff200)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/transport.go:2338 +0x65
created by golang.org/x/net/http2.(*ClientConn).goRun in goroutine 2901
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.25.0/http2/transport.go:369 +0x2d

goroutine 2319 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x32ad080, 0xc0000543c0}, 0xc00169df50, 0xc00169df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x32ad080, 0xc0000543c0}, 0x60?, 0xc00169df50, 0xc00169df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x32ad080?, 0xc0000543c0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x23e3a5?, 0xc000198160?, 0xc000054f60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2332
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 2291 [chan receive, 25 minutes]:
testing.(*testContext).waitParallel(0xc0004a1450)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0009c9d40)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0009c9d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0009c9d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0009c9d40, 0xc000211b80)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2289
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2292 [chan receive, 25 minutes]:
testing.(*testContext).waitParallel(0xc0004a1450)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0019961a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0019961a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0019961a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0019961a0, 0xc000211bc0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2289
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2787 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2786
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2816 [syscall, locked to thread]:
syscall.SyscallN(0x181e95b6dd0?, {0xc001587b20?, 0xc7ea5?, 0x4?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x181e95b6dd0?, 0xc001587b80?, 0xbfdd6?, 0x4705120?, 0xc001587c08?, 0xb281b?, 0xa8ba6?, 0x8000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x450, {0xc00124913a?, 0x2c6, 0xc001249000?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc0008c7908?, {0xc00124913a?, 0xe5170?, 0x400?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0008c7908, {0xc00124913a, 0x2c6, 0x2c6})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000926198, {0xc00124913a?, 0xc001587d98?, 0x13a?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00183fdd0, {0x3288200, 0xc0009a6410})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3288340, 0xc00183fdd0}, {0x3288200, 0xc0009a6410}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3288340, 0xc00183fdd0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4608b60?, {0x3288340?, 0xc00183fdd0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3288340, 0xc00183fdd0}, {0x32882c0, 0xc000926198}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc001c85c80?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2147
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2849 [chan receive]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00064bac0, 0xc0000543c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2872
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585


Test pass (165/210)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 16.26
4 TestDownloadOnly/v1.20.0/preload-exists 0.06
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.22
9 TestDownloadOnly/v1.20.0/DeleteAll 1.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.19
12 TestDownloadOnly/v1.30.0/json-events 10.58
13 TestDownloadOnly/v1.30.0/preload-exists 0
16 TestDownloadOnly/v1.30.0/kubectl 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.23
18 TestDownloadOnly/v1.30.0/DeleteAll 1.14
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 1.09
21 TestBinaryMirror 6.43
22 TestOffline 406.64
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.22
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.21
27 TestAddons/Setup 365.7
30 TestAddons/parallel/Ingress 61.18
31 TestAddons/parallel/InspektorGadget 24
32 TestAddons/parallel/MetricsServer 18.94
33 TestAddons/parallel/HelmTiller 32.3
35 TestAddons/parallel/CSI 89.84
36 TestAddons/parallel/Headlamp 32.18
37 TestAddons/parallel/CloudSpanner 20.78
38 TestAddons/parallel/LocalPath 80.97
39 TestAddons/parallel/NvidiaDevicePlugin 19.47
40 TestAddons/parallel/Yakd 5.02
43 TestAddons/serial/GCPAuth/Namespaces 0.29
44 TestAddons/StoppedEnableDisable 50.06
45 TestCertOptions 521.79
46 TestCertExpiration 796.42
47 TestDockerFlags 283.37
48 TestForceSystemdFlag 232.71
49 TestForceSystemdEnv 336.93
56 TestErrorSpam/start 15.63
57 TestErrorSpam/status 33.05
58 TestErrorSpam/pause 20.48
59 TestErrorSpam/unpause 20.54
60 TestErrorSpam/stop 56.86
63 TestFunctional/serial/CopySyncFile 0.03
64 TestFunctional/serial/StartWithProxy 185.87
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 113.68
67 TestFunctional/serial/KubeContext 0.11
68 TestFunctional/serial/KubectlGetPods 0.18
71 TestFunctional/serial/CacheCmd/cache/add_remote 22.78
72 TestFunctional/serial/CacheCmd/cache/add_local 9.79
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.21
74 TestFunctional/serial/CacheCmd/cache/list 0.21
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 8.05
76 TestFunctional/serial/CacheCmd/cache/cache_reload 31.31
77 TestFunctional/serial/CacheCmd/cache/delete 0.4
78 TestFunctional/serial/MinikubeKubectlCmd 0.41
80 TestFunctional/serial/ExtraConfig 116.35
81 TestFunctional/serial/ComponentHealth 0.15
82 TestFunctional/serial/LogsCmd 7.22
83 TestFunctional/serial/LogsFileCmd 9.03
84 TestFunctional/serial/InvalidService 18.72
90 TestFunctional/parallel/StatusCmd 36.23
94 TestFunctional/parallel/ServiceCmdConnect 24.45
95 TestFunctional/parallel/AddonsCmd 0.58
96 TestFunctional/parallel/PersistentVolumeClaim 43.22
98 TestFunctional/parallel/SSHCmd 18.67
99 TestFunctional/parallel/CpCmd 50.56
100 TestFunctional/parallel/MySQL 59.67
101 TestFunctional/parallel/FileSync 9.03
102 TestFunctional/parallel/CertSync 54.46
106 TestFunctional/parallel/NodeLabels 0.27
108 TestFunctional/parallel/NonActiveRuntimeDisabled 9.23
110 TestFunctional/parallel/License 2.32
111 TestFunctional/parallel/ServiceCmd/DeployApp 18.38
112 TestFunctional/parallel/ProfileCmd/profile_not_create 9.32
113 TestFunctional/parallel/ProfileCmd/profile_list 9.36
114 TestFunctional/parallel/ServiceCmd/List 12.02
115 TestFunctional/parallel/ProfileCmd/profile_json_output 9.61
116 TestFunctional/parallel/ServiceCmd/JSONOutput 11.52
118 TestFunctional/parallel/Version/short 0.18
119 TestFunctional/parallel/Version/components 7.17
121 TestFunctional/parallel/ImageCommands/ImageListShort 6.79
122 TestFunctional/parallel/ImageCommands/ImageListTable 6.67
123 TestFunctional/parallel/ImageCommands/ImageListJson 6.76
124 TestFunctional/parallel/ImageCommands/ImageListYaml 6.86
125 TestFunctional/parallel/ImageCommands/ImageBuild 23.93
126 TestFunctional/parallel/ImageCommands/Setup 3.97
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 21.21
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 17.35
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 24.63
131 TestFunctional/parallel/DockerEnv/powershell 37.92
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 8.97
133 TestFunctional/parallel/ImageCommands/ImageRemove 13.74
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 16.95
135 TestFunctional/parallel/UpdateContextCmd/no_changes 2.22
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.41
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.17
139 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 8.14
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 9.61
141 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 39.55
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
150 TestFunctional/delete_addon-resizer_images 0.41
151 TestFunctional/delete_my-image_image 0.15
152 TestFunctional/delete_minikube_cached_images 0.15
156 TestMultiControlPlane/serial/StartCluster 646.18
157 TestMultiControlPlane/serial/DeployApp 12.31
159 TestMultiControlPlane/serial/AddWorkerNode 222.82
160 TestMultiControlPlane/serial/NodeLabels 0.16
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 25.38
162 TestMultiControlPlane/serial/CopyFile 555.62
163 TestMultiControlPlane/serial/StopSecondaryNode 67.73
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 18.96
168 TestImageBuild/serial/Setup 176.78
169 TestImageBuild/serial/NormalBuild 8.78
170 TestImageBuild/serial/BuildWithBuildArg 7.75
171 TestImageBuild/serial/BuildWithDockerIgnore 6.64
172 TestImageBuild/serial/BuildWithSpecifiedDockerfile 6.45
176 TestJSONOutput/start/Command 222
177 TestJSONOutput/start/Audit 0
179 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/pause/Command 7.05
183 TestJSONOutput/pause/Audit 0
185 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/unpause/Command 6.88
189 TestJSONOutput/unpause/Audit 0
191 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/stop/Command 37.76
195 TestJSONOutput/stop/Audit 0
197 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
199 TestErrorJSONOutput 1.19
204 TestMainNoArgs 0.19
205 TestMinikubeProfile 484.01
208 TestMountStart/serial/StartWithMountFirst 136.27
209 TestMountStart/serial/VerifyMountFirst 8.3
210 TestMountStart/serial/StartWithMountSecond 135.27
211 TestMountStart/serial/VerifyMountSecond 8.54
212 TestMountStart/serial/DeleteFirst 25.13
213 TestMountStart/serial/VerifyMountPostDelete 8.26
214 TestMountStart/serial/Stop 26.2
215 TestMountStart/serial/RestartStopped 102.93
216 TestMountStart/serial/VerifyMountPostStop 8.32
219 TestMultiNode/serial/FreshStart2Nodes 381.24
220 TestMultiNode/serial/DeployApp2Nodes 8.07
222 TestMultiNode/serial/AddNode 205.87
223 TestMultiNode/serial/MultiNodeLabels 0.16
224 TestMultiNode/serial/ProfileList 10.53
225 TestMultiNode/serial/CopyFile 311.9
226 TestMultiNode/serial/StopNode 67.01
227 TestMultiNode/serial/StartAfterStop 161.99
233 TestPreload 449.81
234 TestScheduledStopWindows 302.26
239 TestRunningBinaryUpgrade 814.76
241 TestKubernetesUpgrade 1140.4
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.31
257 TestStoppedBinaryUpgrade/Setup 0.95
258 TestStoppedBinaryUpgrade/Upgrade 767.16
259 TestStoppedBinaryUpgrade/MinikubeLogs 8.71
268 TestPause/serial/Start 447.97
271 TestPause/serial/SecondStartNoReconfiguration 400.9
284 TestPause/serial/Pause 9.09
285 TestPause/serial/VerifyStatus 13
286 TestPause/serial/Unpause 8.36
287 TestPause/serial/PauseAgain 8.64
TestDownloadOnly/v1.20.0/json-events (16.26s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-977400 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-977400 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (16.2565469s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (16.26s)

TestDownloadOnly/v1.20.0/preload-exists (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.06s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-977400
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-977400: exit status 85 (215.3606ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-977400 | minikube5\jenkins | v1.33.1 | 13 May 24 22:21 UTC |          |
	|         | -p download-only-977400        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/13 22:21:46
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0513 22:21:46.433362    2112 out.go:291] Setting OutFile to fd 628 ...
	I0513 22:21:46.433803    2112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:21:46.433803    2112 out.go:304] Setting ErrFile to fd 632...
	I0513 22:21:46.433803    2112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0513 22:21:46.452188    2112 root.go:314] Error reading config file at C:\Users\jenkins.minikube5\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube5\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0513 22:21:46.463396    2112 out.go:298] Setting JSON to true
	I0513 22:21:46.465396    2112 start.go:129] hostinfo: {"hostname":"minikube5","uptime":470,"bootTime":1715638436,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4355 Build 19045.4355","kernelVersion":"10.0.19045.4355 Build 19045.4355","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0513 22:21:46.465396    2112 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 22:21:46.471577    2112 out.go:97] [download-only-977400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	I0513 22:21:46.471577    2112 notify.go:220] Checking for updates...
	I0513 22:21:46.474890    2112 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	W0513 22:21:46.472577    2112 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0513 22:21:46.480418    2112 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0513 22:21:46.482591    2112 out.go:169] MINIKUBE_LOCATION=18872
	I0513 22:21:46.485560    2112 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0513 22:21:46.490027    2112 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0513 22:21:46.490607    2112 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 22:21:51.056665    2112 out.go:97] Using the hyperv driver based on user configuration
	I0513 22:21:51.056665    2112 start.go:297] selected driver: hyperv
	I0513 22:21:51.056665    2112 start.go:901] validating driver "hyperv" against <nil>
	I0513 22:21:51.056665    2112 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 22:21:51.100343    2112 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0513 22:21:51.100980    2112 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0513 22:21:51.100980    2112 cni.go:84] Creating CNI manager for ""
	I0513 22:21:51.101504    2112 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0513 22:21:51.101650    2112 start.go:340] cluster config:
	{Name:download-only-977400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-977400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 22:21:51.102244    2112 iso.go:125] acquiring lock: {Name:mkcecbdb7e30e5a0901160a859f9d5b65d250c44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 22:21:51.103890    2112 out.go:97] Downloading VM boot image ...
	I0513 22:21:51.103890    2112 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.1-amd64.iso
	I0513 22:21:55.091074    2112 out.go:97] Starting "download-only-977400" primary control-plane node in "download-only-977400" cluster
	I0513 22:21:55.091999    2112 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0513 22:21:55.132259    2112 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0513 22:21:55.132259    2112 cache.go:56] Caching tarball of preloaded images
	I0513 22:21:55.133312    2112 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0513 22:21:55.136096    2112 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0513 22:21:55.136096    2112 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0513 22:21:55.198826    2112 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0513 22:21:59.177500    2112 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0513 22:21:59.177998    2112 preload.go:255] verifying checksum of C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0513 22:22:00.101977    2112 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0513 22:22:00.102990    2112 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-977400\config.json ...
	I0513 22:22:00.103683    2112 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-977400\config.json: {Name:mk813400836d5a40ecc3f3994a60ed5d07f497c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:22:00.104059    2112 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0513 22:22:00.106069    2112 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-977400 host does not exist
	  To start a cluster, run: "minikube start -p download-only-977400"

-- /stdout --
** stderr ** 
	W0513 22:22:02.706712    3948 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.22s)

TestDownloadOnly/v1.20.0/DeleteAll (1.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2105149s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.19s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-977400
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-977400: (1.1902184s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.19s)

TestDownloadOnly/v1.30.0/json-events (10.58s)

=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-676200 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-676200 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=hyperv: (10.5749166s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (10.58s)

TestDownloadOnly/v1.30.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0/LogsDuration (0.23s)

=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-676200
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-676200: exit status 85 (228.8746ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-977400 | minikube5\jenkins | v1.33.1 | 13 May 24 22:21 UTC |                     |
	|         | -p download-only-977400        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube5\jenkins | v1.33.1 | 13 May 24 22:22 UTC | 13 May 24 22:22 UTC |
	| delete  | -p download-only-977400        | download-only-977400 | minikube5\jenkins | v1.33.1 | 13 May 24 22:22 UTC | 13 May 24 22:22 UTC |
	| start   | -o=json --download-only        | download-only-676200 | minikube5\jenkins | v1.33.1 | 13 May 24 22:22 UTC |                     |
	|         | -p download-only-676200        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/13 22:22:05
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0513 22:22:05.366831    2604 out.go:291] Setting OutFile to fd 732 ...
	I0513 22:22:05.367096    2604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:22:05.367096    2604 out.go:304] Setting ErrFile to fd 748...
	I0513 22:22:05.367602    2604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:22:05.385778    2604 out.go:298] Setting JSON to true
	I0513 22:22:05.388872    2604 start.go:129] hostinfo: {"hostname":"minikube5","uptime":489,"bootTime":1715638436,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4355 Build 19045.4355","kernelVersion":"10.0.19045.4355 Build 19045.4355","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0513 22:22:05.388872    2604 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 22:22:05.394211    2604 out.go:97] [download-only-676200] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	I0513 22:22:05.394211    2604 notify.go:220] Checking for updates...
	I0513 22:22:05.396667    2604 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 22:22:05.399735    2604 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0513 22:22:05.403142    2604 out.go:169] MINIKUBE_LOCATION=18872
	I0513 22:22:05.409041    2604 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0513 22:22:05.417092    2604 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0513 22:22:05.417826    2604 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 22:22:10.209878    2604 out.go:97] Using the hyperv driver based on user configuration
	I0513 22:22:10.209878    2604 start.go:297] selected driver: hyperv
	I0513 22:22:10.209878    2604 start.go:901] validating driver "hyperv" against <nil>
	I0513 22:22:10.210639    2604 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 22:22:10.251682    2604 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0513 22:22:10.252961    2604 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0513 22:22:10.252961    2604 cni.go:84] Creating CNI manager for ""
	I0513 22:22:10.252961    2604 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 22:22:10.252961    2604 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0513 22:22:10.252961    2604 start.go:340] cluster config:
	{Name:download-only-676200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-676200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 22:22:10.253491    2604 iso.go:125] acquiring lock: {Name:mkcecbdb7e30e5a0901160a859f9d5b65d250c44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 22:22:10.256315    2604 out.go:97] Starting "download-only-676200" primary control-plane node in "download-only-676200" cluster
	I0513 22:22:10.256315    2604 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 22:22:10.298365    2604 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0513 22:22:10.298365    2604 cache.go:56] Caching tarball of preloaded images
	I0513 22:22:10.299663    2604 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 22:22:10.302575    2604 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0513 22:22:10.302575    2604 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0513 22:22:10.374972    2604 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4?checksum=md5:00b6acf85a82438f3897c0a6fafdcee7 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0513 22:22:13.778466    2604 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0513 22:22:13.778645    2604 preload.go:255] verifying checksum of C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0513 22:22:14.596845    2604 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 22:22:14.597774    2604 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-676200\config.json ...
	I0513 22:22:14.598544    2604 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-676200\config.json: {Name:mk0ce4f11f9d1d532022c333828ce466c05d5e07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 22:22:14.599347    2604 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 22:22:14.599799    2604 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\windows\amd64\v1.30.0/kubectl.exe
	
	
	* The control-plane node download-only-676200 host does not exist
	  To start a cluster, run: "minikube start -p download-only-676200"

-- /stdout --
** stderr ** 
	W0513 22:22:15.894589    8336 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.23s)

TestDownloadOnly/v1.30.0/DeleteAll (1.14s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.1434678s)
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (1.14s)

TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (1.09s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-676200
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-676200: (1.0918584s)
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (1.09s)

TestBinaryMirror (6.43s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-583000 --alsologtostderr --binary-mirror http://127.0.0.1:49580 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-583000 --alsologtostderr --binary-mirror http://127.0.0.1:49580 --driver=hyperv: (5.7117484s)
helpers_test.go:175: Cleaning up "binary-mirror-583000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-583000
--- PASS: TestBinaryMirror (6.43s)

TestOffline (406.64s)

=== RUN   TestOffline
=== PAUSE TestOffline


=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-554700 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-554700 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (5m53.487796s)
helpers_test.go:175: Cleaning up "offline-docker-554700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-554700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-554700: (53.1492267s)
--- PASS: TestOffline (406.64s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.22s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster


=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-596400
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-596400: exit status 85 (224.2502ms)

-- stdout --
	* Profile "addons-596400" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-596400"

-- /stdout --
** stderr ** 
	W0513 22:22:26.878050    6880 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.22s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster


=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-596400
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-596400: exit status 85 (212.8572ms)

-- stdout --
	* Profile "addons-596400" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-596400"

-- /stdout --
** stderr ** 
	W0513 22:22:26.870050    5784 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

TestAddons/Setup (365.7s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-596400 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-596400 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m5.6987863s)
--- PASS: TestAddons/Setup (365.70s)

TestAddons/parallel/Ingress (61.18s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress


=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-596400 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-596400 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-596400 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [24616639-f9bf-45ac-b776-b62b9ab57a46] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [24616639-f9bf-45ac-b776-b62b9ab57a46] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.0159215s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-596400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-596400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (9.0475779s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-596400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0513 22:29:57.773034    9332 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:286: (dbg) Run:  kubectl --context addons-596400 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-596400 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-596400 ip: (2.4576866s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.23.108.148
addons_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-596400 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p addons-596400 addons disable ingress-dns --alsologtostderr -v=1: (15.5442387s)
addons_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-596400 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p addons-596400 addons disable ingress --alsologtostderr -v=1: (20.2296263s)
--- PASS: TestAddons/parallel/Ingress (61.18s)

TestAddons/parallel/InspektorGadget (24s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget


=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6jp6z" [303f48b9-4487-4bcc-bd05-5595a7d68af2] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0307544s
addons_test.go:841: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-596400
addons_test.go:841: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-596400: (18.9667519s)
--- PASS: TestAddons/parallel/InspektorGadget (24.00s)

TestAddons/parallel/MetricsServer (18.94s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer


=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 8.4263ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-frvlc" [87c284e1-b9da-4469-9782-6e5398fe5e53] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0206343s
addons_test.go:415: (dbg) Run:  kubectl --context addons-596400 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-596400 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-windows-amd64.exe -p addons-596400 addons disable metrics-server --alsologtostderr -v=1: (13.7052907s)
--- PASS: TestAddons/parallel/MetricsServer (18.94s)

TestAddons/parallel/HelmTiller (32.3s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller


=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 6.1178ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-42bzp" [7042d870-4da9-4a65-a930-9f9106c3bd9f] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.0074999s
addons_test.go:473: (dbg) Run:  kubectl --context addons-596400 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-596400 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (11.9771095s)
addons_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-596400 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe -p addons-596400 addons disable helm-tiller --alsologtostderr -v=1: (14.2982627s)
--- PASS: TestAddons/parallel/HelmTiller (32.30s)

TestAddons/parallel/CSI (89.84s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 30.6122ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-596400 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-596400 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7ff2d0f6-bfef-4313-b53d-b85bb7d1725a] Pending
helpers_test.go:344: "task-pv-pod" [7ff2d0f6-bfef-4313-b53d-b85bb7d1725a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7ff2d0f6-bfef-4313-b53d-b85bb7d1725a] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 21.0145251s
addons_test.go:584: (dbg) Run:  kubectl --context addons-596400 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-596400 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-596400 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-596400 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-596400 delete pod task-pv-pod: (1.4247933s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-596400 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-596400 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-596400 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e027aa01-7ec7-464d-99e4-1a7324b0a40c] Pending
helpers_test.go:344: "task-pv-pod-restore" [e027aa01-7ec7-464d-99e4-1a7324b0a40c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e027aa01-7ec7-464d-99e4-1a7324b0a40c] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0190585s
addons_test.go:626: (dbg) Run:  kubectl --context addons-596400 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-596400 delete pod task-pv-pod-restore: (1.4447841s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-596400 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-596400 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-596400 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-windows-amd64.exe -p addons-596400 addons disable csi-hostpath-driver --alsologtostderr -v=1: (20.1649937s)
addons_test.go:642: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-596400 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-windows-amd64.exe -p addons-596400 addons disable volumesnapshots --alsologtostderr -v=1: (13.926856s)
--- PASS: TestAddons/parallel/CSI (89.84s)

TestAddons/parallel/Headlamp (32.18s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-596400 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-596400 --alsologtostderr -v=1: (15.1667474s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-68456f997b-mbqnr" [9475d21a-6302-4077-81d5-678051da2ebe] Pending
helpers_test.go:344: "headlamp-68456f997b-mbqnr" [9475d21a-6302-4077-81d5-678051da2ebe] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-68456f997b-mbqnr" [9475d21a-6302-4077-81d5-678051da2ebe] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.0121271s
--- PASS: TestAddons/parallel/Headlamp (32.18s)

TestAddons/parallel/CloudSpanner (20.78s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-prmql" [da4011b4-6438-4dd1-80a2-aea8d33c4aa7] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0082286s
addons_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-596400
addons_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-596400: (14.7450487s)
--- PASS: TestAddons/parallel/CloudSpanner (20.78s)

TestAddons/parallel/LocalPath (80.97s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-596400 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-596400 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-596400 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [fb308cc5-9377-4c43-905c-2a3e17ec3adb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [fb308cc5-9377-4c43-905c-2a3e17ec3adb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [fb308cc5-9377-4c43-905c-2a3e17ec3adb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.0095296s
addons_test.go:891: (dbg) Run:  kubectl --context addons-596400 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-596400 ssh "cat /opt/local-path-provisioner/pvc-da9206f4-c917-4595-b5c0-874e94c44c3c_default_test-pvc/file1"
addons_test.go:900: (dbg) Done: out/minikube-windows-amd64.exe -p addons-596400 ssh "cat /opt/local-path-provisioner/pvc-da9206f4-c917-4595-b5c0-874e94c44c3c_default_test-pvc/file1": (9.4524876s)
addons_test.go:912: (dbg) Run:  kubectl --context addons-596400 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-596400 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-596400 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-windows-amd64.exe -p addons-596400 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (58.9170034s)
--- PASS: TestAddons/parallel/LocalPath (80.97s)

TestAddons/parallel/NvidiaDevicePlugin (19.47s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-cnb25" [2f28fdcf-4cae-4c84-accf-f89da529ccf0] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0154745s
addons_test.go:955: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-596400
addons_test.go:955: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-596400: (14.4557643s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (19.47s)

TestAddons/parallel/Yakd (5.02s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-8hpfz" [a67c075f-9178-4adc-9f49-3481c8d70cd6] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0135617s
--- PASS: TestAddons/parallel/Yakd (5.02s)

TestAddons/serial/GCPAuth/Namespaces (0.29s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-596400 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-596400 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.29s)

TestAddons/StoppedEnableDisable (50.06s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-596400
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-596400: (38.5626584s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-596400
addons_test.go:176: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-596400: (4.7671434s)
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-596400
addons_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-596400: (4.3408356s)
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-596400
addons_test.go:185: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-596400: (2.3926848s)
--- PASS: TestAddons/StoppedEnableDisable (50.06s)

TestCertOptions (521.79s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-047900 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-047900 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (7m45.1983937s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-047900 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-047900 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (9.0663908s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-047900 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-047900 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-047900 -- "sudo cat /etc/kubernetes/admin.conf": (8.7486602s)
helpers_test.go:175: Cleaning up "cert-options-047900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-047900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-047900: (38.6392181s)
--- PASS: TestCertOptions (521.79s)

TestCertExpiration (796.42s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-982200 --memory=2048 --cert-expiration=3m --driver=hyperv
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-982200 --memory=2048 --cert-expiration=3m --driver=hyperv: (6m34.874298s)
E0514 00:52:50.968445    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0514 00:53:33.159230    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-982200 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-982200 --memory=2048 --cert-expiration=8760h --driver=hyperv: (2m56.6034382s)
helpers_test.go:175: Cleaning up "cert-expiration-982200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-982200
E0514 00:58:33.191777    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-982200: (44.9130607s)
--- PASS: TestCertExpiration (796.42s)

TestDockerFlags (283.37s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-885700 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-885700 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (3m40.4728989s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-885700 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-885700 ssh "sudo systemctl show docker --property=Environment --no-pager": (8.9005007s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-885700 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-885700 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (9.1465492s)
helpers_test.go:175: Cleaning up "docker-flags-885700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-885700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-885700: (44.8474135s)
--- PASS: TestDockerFlags (283.37s)

TestForceSystemdFlag (232.71s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-650500 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-650500 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (2m57.9846431s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-650500 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-650500 ssh "docker info --format {{.CgroupDriver}}": (8.8522856s)
helpers_test.go:175: Cleaning up "force-systemd-flag-650500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-650500
E0514 00:42:50.921598    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-650500: (45.8593424s)
--- PASS: TestForceSystemdFlag (232.71s)

TestForceSystemdEnv (336.93s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-966200 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-966200 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (4m48.559237s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-966200 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-966200 ssh "docker info --format {{.CgroupDriver}}": (9.1062518s)
helpers_test.go:175: Cleaning up "force-systemd-env-966200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-966200
E0514 01:02:50.998025    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0514 01:03:16.450936    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-966200: (39.2591964s)
--- PASS: TestForceSystemdEnv (336.93s)

TestErrorSpam/start (15.63s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 start --dry-run: (5.1408438s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 start --dry-run: (5.2553896s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 start --dry-run: (5.2271566s)
--- PASS: TestErrorSpam/start (15.63s)

TestErrorSpam/status (33.05s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 status
E0513 22:36:16.689341    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 status: (11.4592806s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 status: (10.8490053s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 status: (10.735335s)
--- PASS: TestErrorSpam/status (33.05s)

TestErrorSpam/pause (20.48s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 pause: (7.0509665s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 pause: (6.696169s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 pause: (6.7278003s)
--- PASS: TestErrorSpam/pause (20.48s)

TestErrorSpam/unpause (20.54s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 unpause: (6.8970065s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 unpause: (6.8278006s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 unpause: (6.8086772s)
--- PASS: TestErrorSpam/unpause (20.54s)

TestErrorSpam/stop (56.86s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 stop: (37.1125067s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 stop: (10.0132725s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-457100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-457100 stop: (9.727794s)
--- PASS: TestErrorSpam/stop (56.86s)

TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\5984\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/StartWithProxy (185.87s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-129600 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0513 22:39:00.546810    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-129600 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m5.8584209s)
--- PASS: TestFunctional/serial/StartWithProxy (185.87s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (113.68s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-129600 --alsologtostderr -v=8
E0513 22:43:32.745492    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-129600 --alsologtostderr -v=8: (1m53.6811001s)
functional_test.go:659: soft start took 1m53.6826434s for "functional-129600" cluster.
--- PASS: TestFunctional/serial/SoftStart (113.68s)

TestFunctional/serial/KubeContext (0.11s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.11s)

TestFunctional/serial/KubectlGetPods (0.18s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-129600 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.18s)

TestFunctional/serial/CacheCmd/cache/add_remote (22.78s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 cache add registry.k8s.io/pause:3.1: (7.6642893s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 cache add registry.k8s.io/pause:3.3: (7.5454158s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 cache add registry.k8s.io/pause:latest: (7.5605213s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (22.78s)

TestFunctional/serial/CacheCmd/cache/add_local (9.79s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-129600 C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1241286799\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-129600 C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1241286799\001: (2.1104132s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 cache add minikube-local-cache-test:functional-129600
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 cache add minikube-local-cache-test:functional-129600: (7.2974081s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 cache delete minikube-local-cache-test:functional-129600
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-129600
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (9.79s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.21s)

TestFunctional/serial/CacheCmd/cache/list (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.21s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 ssh sudo crictl images: (8.0478129s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.05s)

TestFunctional/serial/CacheCmd/cache/cache_reload (31.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 ssh sudo docker rmi registry.k8s.io/pause:latest: (8.0478639s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-129600 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (8.1062736s)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	W0513 22:44:25.569940   13716 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 cache reload: (7.1150095s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (8.0222765s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (31.31s)

TestFunctional/serial/CacheCmd/cache/delete (0.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.40s)

TestFunctional/serial/MinikubeKubectlCmd (0.41s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 kubectl -- --context functional-129600 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.41s)

TestFunctional/serial/ExtraConfig (116.35s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-129600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-129600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m56.3507754s)
functional_test.go:757: restart took 1m56.3546872s for "functional-129600" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (116.35s)

TestFunctional/serial/ComponentHealth (0.15s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-129600 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.15s)

TestFunctional/serial/LogsCmd (7.22s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 logs: (7.2068218s)
--- PASS: TestFunctional/serial/LogsCmd (7.22s)

TestFunctional/serial/LogsFileCmd (9.03s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 logs --file C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2693371949\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 logs --file C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2693371949\001\logs.txt: (9.015211s)
--- PASS: TestFunctional/serial/LogsFileCmd (9.03s)

TestFunctional/serial/InvalidService (18.72s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-129600 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-129600
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-129600: exit status 115 (14.2132729s)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://172.23.102.96:31110 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	W0513 22:47:35.226881   12592 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_service_d27a1c5599baa2f8050d003f41b0266333639286_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-129600 delete -f testdata\invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-129600 delete -f testdata\invalidsvc.yaml: (1.1629127s)
--- PASS: TestFunctional/serial/InvalidService (18.72s)

TestFunctional/parallel/StatusCmd (36.23s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 status: (11.3941357s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (12.7736315s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 status -o json: (12.0539191s)
--- PASS: TestFunctional/parallel/StatusCmd (36.23s)

TestFunctional/parallel/ServiceCmdConnect (24.45s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-129600 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-129600 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-kbrkd" [47cc6f65-79e4-42bc-99e5-2146c60c0637] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-kbrkd" [47cc6f65-79e4-42bc-99e5-2146c60c0637] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.009134s
functional_test.go:1645: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 service hello-node-connect --url
functional_test.go:1645: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 service hello-node-connect --url: (16.0481512s)
functional_test.go:1651: found endpoint for hello-node-connect: http://172.23.102.96:31177
functional_test.go:1671: http://172.23.102.96:31177: success! body:
Hostname: hello-node-connect-57b4589c47-kbrkd
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.23.102.96:8080/
Request Headers:
	accept-encoding=gzip
	host=172.23.102.96:31177
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (24.45s)

TestFunctional/parallel/AddonsCmd (0.58s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.58s)

TestFunctional/parallel/PersistentVolumeClaim (43.22s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1bab2554-ed75-4ec0-a1a0-bff155677696] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0124811s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-129600 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-129600 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-129600 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-129600 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9316f995-72bc-4336-b6d0-86fde0f069d1] Pending
helpers_test.go:344: "sp-pod" [9316f995-72bc-4336-b6d0-86fde0f069d1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9316f995-72bc-4336-b6d0-86fde0f069d1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.0056324s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-129600 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-129600 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-129600 delete -f testdata/storage-provisioner/pod.yaml: (1.364097s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-129600 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f02f3318-48a9-40a0-bd16-ae5afd295ee1] Pending
helpers_test.go:344: "sp-pod" [f02f3318-48a9-40a0-bd16-ae5afd295ee1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f02f3318-48a9-40a0-bd16-ae5afd295ee1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.0168513s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-129600 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (43.22s)

TestFunctional/parallel/SSHCmd (18.67s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 ssh "echo hello": (9.9868058s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 ssh "cat /etc/hostname": (8.6738757s)
--- PASS: TestFunctional/parallel/SSHCmd (18.67s)

TestFunctional/parallel/CpCmd (50.56s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 cp testdata\cp-test.txt /home/docker/cp-test.txt: (7.0538016s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 ssh -n functional-129600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 ssh -n functional-129600 "sudo cat /home/docker/cp-test.txt": (8.9521627s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 cp functional-129600:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd1933250433\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 cp functional-129600:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd1933250433\001\cp-test.txt: (9.3452454s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 ssh -n functional-129600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 ssh -n functional-129600 "sudo cat /home/docker/cp-test.txt": (8.930819s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (6.9695082s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 ssh -n functional-129600 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 ssh -n functional-129600 "sudo cat /tmp/does/not/exist/cp-test.txt": (9.2804218s)
--- PASS: TestFunctional/parallel/CpCmd (50.56s)

TestFunctional/parallel/MySQL (59.67s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-129600 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-57w2c" [1b8f066c-5a64-46d3-af0c-5e674bbd6c7a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-57w2c" [1b8f066c-5a64-46d3-af0c-5e674bbd6c7a] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 45.0162799s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-129600 exec mysql-64454c8b5c-57w2c -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-129600 exec mysql-64454c8b5c-57w2c -- mysql -ppassword -e "show databases;": exit status 1 (232.8998ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-129600 exec mysql-64454c8b5c-57w2c -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-129600 exec mysql-64454c8b5c-57w2c -- mysql -ppassword -e "show databases;": exit status 1 (244.6022ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-129600 exec mysql-64454c8b5c-57w2c -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-129600 exec mysql-64454c8b5c-57w2c -- mysql -ppassword -e "show databases;": exit status 1 (276.2902ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-129600 exec mysql-64454c8b5c-57w2c -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-129600 exec mysql-64454c8b5c-57w2c -- mysql -ppassword -e "show databases;": exit status 1 (275.9596ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-129600 exec mysql-64454c8b5c-57w2c -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-129600 exec mysql-64454c8b5c-57w2c -- mysql -ppassword -e "show databases;": exit status 1 (294.7338ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-129600 exec mysql-64454c8b5c-57w2c -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (59.67s)

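In the MySQL run above, the first exec fails with ERROR 2002 (the server socket does not exist yet) and the following ones fail with ERROR 1045 (root credentials not yet initialized), which is the normal mysql:5.7 startup sequence; the test simply re-runs the query until it succeeds. The polling pattern can be sketched in shell roughly as follows (an illustrative helper under stated assumptions, not the actual harness code; `retry` is a hypothetical name, and the pod name and password are taken from the log):

```shell
#!/bin/sh
# retry: run a command until it succeeds or the attempt budget is exhausted.
# Sketch of the polling the test does around `kubectl exec ... mysql`.
retry() {
  max=$1; shift          # first argument: maximum number of attempts
  attempt=1
  while ! "$@"; do       # re-run the command while it keeps failing
    if [ "$attempt" -ge "$max" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep 1              # back off briefly between attempts
  done
}

# Hypothetical invocation mirroring the log above:
# retry 30 kubectl --context functional-129600 exec mysql-64454c8b5c-57w2c -- \
#   mysql -ppassword -e "show databases;"
```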
TestFunctional/parallel/FileSync (9.03s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/5984/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 ssh "sudo cat /etc/test/nested/copy/5984/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 ssh "sudo cat /etc/test/nested/copy/5984/hosts": (9.0291651s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (9.03s)

TestFunctional/parallel/CertSync (54.46s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/5984.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 ssh "sudo cat /etc/ssl/certs/5984.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 ssh "sudo cat /etc/ssl/certs/5984.pem": (9.342949s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/5984.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 ssh "sudo cat /usr/share/ca-certificates/5984.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 ssh "sudo cat /usr/share/ca-certificates/5984.pem": (8.7379876s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 ssh "sudo cat /etc/ssl/certs/51391683.0": (9.2674384s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/59842.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 ssh "sudo cat /etc/ssl/certs/59842.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 ssh "sudo cat /etc/ssl/certs/59842.pem": (9.0665247s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/59842.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 ssh "sudo cat /usr/share/ca-certificates/59842.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 ssh "sudo cat /usr/share/ca-certificates/59842.pem": (8.9771484s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (9.0456343s)
--- PASS: TestFunctional/parallel/CertSync (54.46s)

TestFunctional/parallel/NodeLabels (0.27s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-129600 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.27s)

TestFunctional/parallel/NonActiveRuntimeDisabled (9.23s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-129600 ssh "sudo systemctl is-active crio": exit status 1 (9.2290898s)

-- stdout --
	inactive
-- /stdout --
** stderr **
	W0513 22:48:46.215149    6356 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (9.23s)

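The non-zero exit in this test is expected: `systemctl is-active` prints the unit state and exits 3 for an inactive unit, ssh propagates that status ("ssh: Process exited with status 3"), and the test passes when the disabled runtime (crio) reports "inactive". A minimal sketch of that check, assuming a hypothetical helper name `runtime_disabled` (not the harness code):

```shell
#!/bin/sh
# runtime_disabled: succeed only when the probed command fails (non-zero
# exit) AND its stdout is "inactive", which is what `systemctl is-active`
# produces for a stopped unit.
runtime_disabled() {
  if state=$("$@"); then
    return 1                  # command succeeded: unit is active, not disabled
  fi
  [ "$state" = "inactive" ]   # failed exit + "inactive" is the expected case
}

# Hypothetical invocation mirroring the log above:
# runtime_disabled out/minikube-windows-amd64.exe -p functional-129600 \
#   ssh "sudo systemctl is-active crio" && echo "crio disabled as expected"
```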
TestFunctional/parallel/License (2.32s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (2.2948368s)
--- PASS: TestFunctional/parallel/License (2.32s)

TestFunctional/parallel/ServiceCmd/DeployApp (18.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-129600 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-129600 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-jbc52" [563edb96-b0b7-475b-a891-d9907b7aa9c8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-jbc52" [563edb96-b0b7-475b-a891-d9907b7aa9c8] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 18.0141711s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (18.38s)

TestFunctional/parallel/ProfileCmd/profile_not_create (9.32s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (8.9538443s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (9.32s)

TestFunctional/parallel/ProfileCmd/profile_list (9.36s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (9.1372094s)
functional_test.go:1311: Took "9.1447987s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "213.0843ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (9.36s)

TestFunctional/parallel/ServiceCmd/List (12.02s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 service list: (12.0232136s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (12.02s)

TestFunctional/parallel/ProfileCmd/profile_json_output (9.61s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (9.4119066s)
functional_test.go:1362: Took "9.4181483s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "195.1931ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (9.61s)

TestFunctional/parallel/ServiceCmd/JSONOutput (11.52s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 service list -o json: (11.5200723s)
functional_test.go:1490: Took "11.5201595s" to run "out/minikube-windows-amd64.exe -p functional-129600 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (11.52s)

TestFunctional/parallel/Version/short (0.18s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 version --short
--- PASS: TestFunctional/parallel/Version/short (0.18s)

TestFunctional/parallel/Version/components (7.17s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 version -o=json --components: (7.1678298s)
--- PASS: TestFunctional/parallel/Version/components (7.17s)

TestFunctional/parallel/ImageCommands/ImageListShort (6.79s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 image ls --format short --alsologtostderr: (6.7885037s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-129600 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-129600
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-129600
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-129600 image ls --format short --alsologtostderr:
W0513 22:51:06.053628    1720 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0513 22:51:06.104621    1720 out.go:291] Setting OutFile to fd 700 ...
I0513 22:51:06.105619    1720 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 22:51:06.105619    1720 out.go:304] Setting ErrFile to fd 988...
I0513 22:51:06.105619    1720 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 22:51:06.118629    1720 config.go:182] Loaded profile config "functional-129600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 22:51:06.118629    1720 config.go:182] Loaded profile config "functional-129600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 22:51:06.119635    1720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
I0513 22:51:08.091365    1720 main.go:141] libmachine: [stdout =====>] : Running
I0513 22:51:08.091365    1720 main.go:141] libmachine: [stderr =====>] : 
I0513 22:51:08.100518    1720 ssh_runner.go:195] Run: systemctl --version
I0513 22:51:08.100518    1720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
I0513 22:51:10.045662    1720 main.go:141] libmachine: [stdout =====>] : Running
I0513 22:51:10.045662    1720 main.go:141] libmachine: [stderr =====>] : 
I0513 22:51:10.046035    1720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-129600 ).networkadapters[0]).ipaddresses[0]
I0513 22:51:12.431374    1720 main.go:141] libmachine: [stdout =====>] : 172.23.102.96
I0513 22:51:12.431517    1720 main.go:141] libmachine: [stderr =====>] : 
I0513 22:51:12.431900    1720 sshutil.go:53] new ssh client: &{IP:172.23.102.96 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-129600\id_rsa Username:docker}
I0513 22:51:12.532756    1720 ssh_runner.go:235] Completed: systemctl --version: (4.4319908s)
I0513 22:51:12.539166    1720 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (6.79s)

TestFunctional/parallel/ImageCommands/ImageListTable (6.67s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 image ls --format table --alsologtostderr: (6.6724053s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-129600 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/nginx                     | latest            | 1d668e06f1e53 | 188MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/minikube-local-cache-test | functional-129600 | 6649de5472171 | 30B    |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | alpine            | 501d84f5d0648 | 48.3MB |
| registry.k8s.io/kube-apiserver              | v1.30.0           | c42f13656d0b2 | 117MB  |
| registry.k8s.io/kube-scheduler              | v1.30.0           | 259c8277fcbbc | 62MB   |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.30.0           | c7aad43836fa5 | 111MB  |
| registry.k8s.io/kube-proxy                  | v1.30.0           | a0bf559e280cf | 84.7MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| gcr.io/google-containers/addon-resizer      | functional-129600 | ffd4cfbbe753e | 32.9MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-129600 image ls --format table --alsologtostderr:
W0513 22:51:19.604676    5484 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0513 22:51:19.679048    5484 out.go:291] Setting OutFile to fd 764 ...
I0513 22:51:19.679876    5484 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 22:51:19.679876    5484 out.go:304] Setting ErrFile to fd 708...
I0513 22:51:19.679876    5484 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 22:51:19.692351    5484 config.go:182] Loaded profile config "functional-129600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 22:51:19.692964    5484 config.go:182] Loaded profile config "functional-129600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 22:51:19.693720    5484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
I0513 22:51:21.617013    5484 main.go:141] libmachine: [stdout =====>] : Running
I0513 22:51:21.617013    5484 main.go:141] libmachine: [stderr =====>] : 
I0513 22:51:21.630001    5484 ssh_runner.go:195] Run: systemctl --version
I0513 22:51:21.630001    5484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
I0513 22:51:23.600537    5484 main.go:141] libmachine: [stdout =====>] : Running
I0513 22:51:23.600537    5484 main.go:141] libmachine: [stderr =====>] : 
I0513 22:51:23.601204    5484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-129600 ).networkadapters[0]).ipaddresses[0]
I0513 22:51:25.980849    5484 main.go:141] libmachine: [stdout =====>] : 172.23.102.96
I0513 22:51:25.981289    5484 main.go:141] libmachine: [stderr =====>] : 
I0513 22:51:25.981780    5484 sshutil.go:53] new ssh client: &{IP:172.23.102.96 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-129600\id_rsa Username:docker}
I0513 22:51:26.088219    5484 ssh_runner.go:235] Completed: systemctl --version: (4.4579941s)
I0513 22:51:26.095373    5484 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (6.67s)

TestFunctional/parallel/ImageCommands/ImageListJson (6.76s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 image ls --format json --alsologtostderr: (6.7558994s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-129600 image ls --format json --alsologtostderr:
[{"id":"501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48300000"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"62000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-129600"],"size":"32900000"},{"id":"6649de54721714db84153876ee04299ea77028a660daaf524292f1a0577f5812","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-129600"],"size":"30"},{"id":"1d668e06f1e534ab338404ba891c37d618dd53c9073dcdd4ebde82aa7643f83f","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117000000"},{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"111000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"84700000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-129600 image ls --format json --alsologtostderr:
W0513 22:51:12.847427   14252 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0513 22:51:12.903040   14252 out.go:291] Setting OutFile to fd 700 ...
I0513 22:51:12.903040   14252 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 22:51:12.903040   14252 out.go:304] Setting ErrFile to fd 988...
I0513 22:51:12.903040   14252 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 22:51:12.917631   14252 config.go:182] Loaded profile config "functional-129600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 22:51:12.918044   14252 config.go:182] Loaded profile config "functional-129600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 22:51:12.918920   14252 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
I0513 22:51:14.951873   14252 main.go:141] libmachine: [stdout =====>] : Running

I0513 22:51:14.951981   14252 main.go:141] libmachine: [stderr =====>] : 
I0513 22:51:14.960696   14252 ssh_runner.go:195] Run: systemctl --version
I0513 22:51:14.961697   14252 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
I0513 22:51:16.994148   14252 main.go:141] libmachine: [stdout =====>] : Running

I0513 22:51:16.994238   14252 main.go:141] libmachine: [stderr =====>] : 
I0513 22:51:16.994318   14252 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-129600 ).networkadapters[0]).ipaddresses[0]
I0513 22:51:19.355406   14252 main.go:141] libmachine: [stdout =====>] : 172.23.102.96

I0513 22:51:19.355980   14252 main.go:141] libmachine: [stderr =====>] : 
I0513 22:51:19.355980   14252 sshutil.go:53] new ssh client: &{IP:172.23.102.96 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-129600\id_rsa Username:docker}
I0513 22:51:19.454945   14252 ssh_runner.go:235] Completed: systemctl --version: (4.4929157s)
I0513 22:51:19.462521   14252 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (6.76s)
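The stdout above shows that `image ls --format json` emits an array of objects with `id`, `repoDigests`, `repoTags`, and `size` (bytes as a decimal string). A minimal sketch of consuming that shape — the single sample entry below is illustrative, not the full list:

```python
import json

# One entry in the shape shown in the stdout above (id truncated, values illustrative).
raw = '[{"id": "e6f18168...", "repoDigests": [], "repoTags": ["registry.k8s.io/pause:3.9"], "size": "744000"}]'

images = json.loads(raw)
# "size" is a decimal string, so convert before doing arithmetic on it.
total_bytes = sum(int(img["size"]) for img in images)
tags = [tag for img in images for tag in img["repoTags"]]
print(total_bytes, tags)
```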

TestFunctional/parallel/ImageCommands/ImageListYaml (6.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 image ls --format yaml --alsologtostderr: (6.8628024s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-129600 image ls --format yaml --alsologtostderr:
- id: 6649de54721714db84153876ee04299ea77028a660daaf524292f1a0577f5812
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-129600
size: "30"
- id: 1d668e06f1e534ab338404ba891c37d618dd53c9073dcdd4ebde82aa7643f83f
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "111000000"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "84700000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-129600
size: "32900000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48300000"
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "62000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"

functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-129600 image ls --format yaml --alsologtostderr:
W0513 22:51:24.742955    2720 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0513 22:51:24.807483    2720 out.go:291] Setting OutFile to fd 860 ...
I0513 22:51:24.822556    2720 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 22:51:24.822556    2720 out.go:304] Setting ErrFile to fd 920...
I0513 22:51:24.822642    2720 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 22:51:24.834859    2720 config.go:182] Loaded profile config "functional-129600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 22:51:24.835964    2720 config.go:182] Loaded profile config "functional-129600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 22:51:24.836229    2720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
I0513 22:51:26.889034    2720 main.go:141] libmachine: [stdout =====>] : Running

I0513 22:51:26.889034    2720 main.go:141] libmachine: [stderr =====>] : 
I0513 22:51:26.899114    2720 ssh_runner.go:195] Run: systemctl --version
I0513 22:51:26.899114    2720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
I0513 22:51:28.963326    2720 main.go:141] libmachine: [stdout =====>] : Running

I0513 22:51:28.963326    2720 main.go:141] libmachine: [stderr =====>] : 
I0513 22:51:28.963326    2720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-129600 ).networkadapters[0]).ipaddresses[0]
I0513 22:51:31.331618    2720 main.go:141] libmachine: [stdout =====>] : 172.23.102.96

I0513 22:51:31.331789    2720 main.go:141] libmachine: [stderr =====>] : 
I0513 22:51:31.332358    2720 sshutil.go:53] new ssh client: &{IP:172.23.102.96 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-129600\id_rsa Username:docker}
I0513 22:51:31.434958    2720 ssh_runner.go:235] Completed: systemctl --version: (4.5356867s)
I0513 22:51:31.443063    2720 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (6.86s)

TestFunctional/parallel/ImageCommands/ImageBuild (23.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-129600 ssh pgrep buildkitd: exit status 1 (8.6972274s)

** stderr ** 
	W0513 22:51:26.277258   14284 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 image build -t localhost/my-image:functional-129600 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 image build -t localhost/my-image:functional-129600 testdata\build --alsologtostderr: (8.7159151s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-129600 image build -t localhost/my-image:functional-129600 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in e9dd52ce11b1
---> Removed intermediate container e9dd52ce11b1
---> 192bda54ccd2
Step 3/3 : ADD content.txt /
---> a25781a15f32
Successfully built a25781a15f32
Successfully tagged localhost/my-image:functional-129600
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-129600 image build -t localhost/my-image:functional-129600 testdata\build --alsologtostderr:
W0513 22:51:34.968263   13464 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0513 22:51:35.020254   13464 out.go:291] Setting OutFile to fd 840 ...
I0513 22:51:35.034579   13464 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 22:51:35.034579   13464 out.go:304] Setting ErrFile to fd 1012...
I0513 22:51:35.034579   13464 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 22:51:35.050652   13464 config.go:182] Loaded profile config "functional-129600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 22:51:35.066093   13464 config.go:182] Loaded profile config "functional-129600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 22:51:35.066683   13464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
I0513 22:51:36.990566   13464 main.go:141] libmachine: [stdout =====>] : Running

I0513 22:51:36.990566   13464 main.go:141] libmachine: [stderr =====>] : 
I0513 22:51:36.999316   13464 ssh_runner.go:195] Run: systemctl --version
I0513 22:51:36.999316   13464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-129600 ).state
I0513 22:51:38.959335   13464 main.go:141] libmachine: [stdout =====>] : Running

I0513 22:51:38.959335   13464 main.go:141] libmachine: [stderr =====>] : 
I0513 22:51:38.960264   13464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-129600 ).networkadapters[0]).ipaddresses[0]
I0513 22:51:41.222610   13464 main.go:141] libmachine: [stdout =====>] : 172.23.102.96

I0513 22:51:41.222610   13464 main.go:141] libmachine: [stderr =====>] : 
I0513 22:51:41.222610   13464 sshutil.go:53] new ssh client: &{IP:172.23.102.96 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-129600\id_rsa Username:docker}
I0513 22:51:41.329452   13464 ssh_runner.go:235] Completed: systemctl --version: (4.3298718s)
I0513 22:51:41.329551   13464 build_images.go:161] Building image from path: C:\Users\jenkins.minikube5\AppData\Local\Temp\build.20096696.tar
I0513 22:51:41.340291   13464 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0513 22:51:41.368094   13464 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.20096696.tar
I0513 22:51:41.375704   13464 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.20096696.tar: stat -c "%s %y" /var/lib/minikube/build/build.20096696.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.20096696.tar': No such file or directory
I0513 22:51:41.375853   13464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\AppData\Local\Temp\build.20096696.tar --> /var/lib/minikube/build/build.20096696.tar (3072 bytes)
I0513 22:51:41.427465   13464 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.20096696
I0513 22:51:41.453008   13464 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.20096696 -xf /var/lib/minikube/build/build.20096696.tar
I0513 22:51:41.468008   13464 docker.go:360] Building image: /var/lib/minikube/build/build.20096696
I0513 22:51:41.475622   13464 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-129600 /var/lib/minikube/build/build.20096696
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0513 22:51:43.527821   13464 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-129600 /var/lib/minikube/build/build.20096696: (2.0521271s)
I0513 22:51:43.535544   13464 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.20096696
I0513 22:51:43.560556   13464 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.20096696.tar
I0513 22:51:43.579334   13464 build_images.go:217] Built localhost/my-image:functional-129600 from C:\Users\jenkins.minikube5\AppData\Local\Temp\build.20096696.tar
I0513 22:51:43.579451   13464 build_images.go:133] succeeded building to: functional-129600
I0513 22:51:43.579451   13464 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 image ls: (6.5135065s)
E0513 22:53:32.760352    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (23.93s)
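The three `Step n/3` lines in the build log above imply a Dockerfile with exactly those instructions. A sketch reconstructing an equivalent build context — the directory name and `content.txt` contents are illustrative, only the three instructions come from the log:

```python
from pathlib import Path

# Recreate a build context matching the steps shown above (names illustrative).
ctx = Path("build-ctx")
ctx.mkdir(exist_ok=True)
(ctx / "content.txt").write_text("hello\n")
(ctx / "Dockerfile").write_text(
    "FROM gcr.io/k8s-minikube/busybox\n"   # Step 1/3
    "RUN true\n"                           # Step 2/3
    "ADD content.txt /\n"                  # Step 3/3
)
# Where Docker is available, the test's build is then:
#   docker build -t localhost/my-image:functional-129600 build-ctx
```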

TestFunctional/parallel/ImageCommands/Setup (3.97s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.7339646s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-129600
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.97s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (21.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 image load --daemon gcr.io/google-containers/addon-resizer:functional-129600 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 image load --daemon gcr.io/google-containers/addon-resizer:functional-129600 --alsologtostderr: (14.4090441s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 image ls: (6.7806767s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (21.21s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (17.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 image load --daemon gcr.io/google-containers/addon-resizer:functional-129600 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 image load --daemon gcr.io/google-containers/addon-resizer:functional-129600 --alsologtostderr: (10.656615s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 image ls: (6.6810244s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (17.35s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (24.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.7601226s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-129600
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 image load --daemon gcr.io/google-containers/addon-resizer:functional-129600 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 image load --daemon gcr.io/google-containers/addon-resizer:functional-129600 --alsologtostderr: (13.6216052s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 image ls
E0513 22:49:55.928563    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 image ls: (7.0155931s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (24.63s)

TestFunctional/parallel/DockerEnv/powershell (37.92s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-129600 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-129600"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-129600 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-129600": (24.9181586s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-129600 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-129600 docker-env | Invoke-Expression ; docker images": (12.9731388s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (37.92s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 image save gcr.io/google-containers/addon-resizer:functional-129600 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 image save gcr.io/google-containers/addon-resizer:functional-129600 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (8.9678525s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.97s)

TestFunctional/parallel/ImageCommands/ImageRemove (13.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 image rm gcr.io/google-containers/addon-resizer:functional-129600 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 image rm gcr.io/google-containers/addon-resizer:functional-129600 --alsologtostderr: (6.9911598s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 image ls: (6.7456704s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (13.74s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (16.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (9.1002154s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 image ls: (7.8505766s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (16.95s)

TestFunctional/parallel/UpdateContextCmd/no_changes (2.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 update-context --alsologtostderr -v=2: (2.22042s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.41s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 update-context --alsologtostderr -v=2: (2.4065499s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.41s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 update-context --alsologtostderr -v=2: (2.1717344s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.17s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-129600 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-129600 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-129600 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 14008: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 4988: TerminateProcess: Access is denied.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-129600 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.14s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-129600
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-129600 image save --daemon gcr.io/google-containers/addon-resizer:functional-129600 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-129600 image save --daemon gcr.io/google-containers/addon-resizer:functional-129600 --alsologtostderr: (9.0719817s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-129600
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.61s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-129600 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (39.55s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-129600 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [aa849966-bd0f-46e1-a67f-6ab2a50404a4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [aa849966-bd0f-46e1-a67f-6ab2a50404a4] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 39.019733s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (39.55s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-129600 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1300: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_addon-resizer_images (0.41s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-129600
--- PASS: TestFunctional/delete_addon-resizer_images (0.41s)

TestFunctional/delete_my-image_image (0.15s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-129600
--- PASS: TestFunctional/delete_my-image_image (0.15s)

TestFunctional/delete_minikube_cached_images (0.15s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-129600
--- PASS: TestFunctional/delete_minikube_cached_images (0.15s)

TestMultiControlPlane/serial/StartCluster (646.18s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-586300 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0513 22:57:50.575491    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0513 22:57:50.591053    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0513 22:57:50.607128    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0513 22:57:50.638548    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0513 22:57:50.686034    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0513 22:57:50.780332    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0513 22:57:50.954714    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0513 22:57:51.283617    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0513 22:57:51.933536    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0513 22:57:53.219083    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0513 22:57:55.785845    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0513 22:58:00.914641    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0513 22:58:11.155557    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0513 22:58:31.651502    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0513 22:58:32.776927    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
E0513 22:59:12.615381    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0513 23:00:34.548012    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0513 23:02:50.583869    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0513 23:03:18.408893    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0513 23:03:32.792960    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-586300 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (10m13.3286723s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 status -v=7 --alsologtostderr: (32.8486389s)
--- PASS: TestMultiControlPlane/serial/StartCluster (646.18s)

TestMultiControlPlane/serial/DeployApp (12.31s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-586300 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-586300 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-586300 -- rollout status deployment/busybox: (4.2191135s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-586300 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-586300 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-586300 -- exec busybox-fc5497c4f-hd72c -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-586300 -- exec busybox-fc5497c4f-hd72c -- nslookup kubernetes.io: (2.1503697s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-586300 -- exec busybox-fc5497c4f-njj9r -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-586300 -- exec busybox-fc5497c4f-v5w28 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-586300 -- exec busybox-fc5497c4f-v5w28 -- nslookup kubernetes.io: (1.5154s)
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-586300 -- exec busybox-fc5497c4f-hd72c -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-586300 -- exec busybox-fc5497c4f-njj9r -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-586300 -- exec busybox-fc5497c4f-v5w28 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-586300 -- exec busybox-fc5497c4f-hd72c -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-586300 -- exec busybox-fc5497c4f-njj9r -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-586300 -- exec busybox-fc5497c4f-v5w28 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (12.31s)

TestMultiControlPlane/serial/AddWorkerNode (222.82s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-586300 -v=7 --alsologtostderr
E0513 23:07:50.616460    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0513 23:08:32.799646    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-586300 -v=7 --alsologtostderr: (3m0.4362162s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 status -v=7 --alsologtostderr: (42.3831003s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (222.82s)

TestMultiControlPlane/serial/NodeLabels (0.16s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-586300 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.16s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (25.38s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (25.3815656s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (25.38s)

TestMultiControlPlane/serial/CopyFile (555.62s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 status --output json -v=7 --alsologtostderr: (42.9708784s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 cp testdata\cp-test.txt ha-586300:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 cp testdata\cp-test.txt ha-586300:/home/docker/cp-test.txt: (8.5346098s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300 "sudo cat /home/docker/cp-test.txt": (8.5425342s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3745865926\001\cp-test_ha-586300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3745865926\001\cp-test_ha-586300.txt: (8.540452s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300 "sudo cat /home/docker/cp-test.txt": (8.5534298s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300:/home/docker/cp-test.txt ha-586300-m02:/home/docker/cp-test_ha-586300_ha-586300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300:/home/docker/cp-test.txt ha-586300-m02:/home/docker/cp-test_ha-586300_ha-586300-m02.txt: (14.7930039s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300 "sudo cat /home/docker/cp-test.txt": (8.5865984s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m02 "sudo cat /home/docker/cp-test_ha-586300_ha-586300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m02 "sudo cat /home/docker/cp-test_ha-586300_ha-586300-m02.txt": (8.3902481s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300:/home/docker/cp-test.txt ha-586300-m03:/home/docker/cp-test_ha-586300_ha-586300-m03.txt
E0513 23:12:50.607604    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300:/home/docker/cp-test.txt ha-586300-m03:/home/docker/cp-test_ha-586300_ha-586300-m03.txt: (14.7713332s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300 "sudo cat /home/docker/cp-test.txt": (8.4323504s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m03 "sudo cat /home/docker/cp-test_ha-586300_ha-586300-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m03 "sudo cat /home/docker/cp-test_ha-586300_ha-586300-m03.txt": (8.4636501s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300:/home/docker/cp-test.txt ha-586300-m04:/home/docker/cp-test_ha-586300_ha-586300-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300:/home/docker/cp-test.txt ha-586300-m04:/home/docker/cp-test_ha-586300_ha-586300-m04.txt: (14.9349699s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300 "sudo cat /home/docker/cp-test.txt"
E0513 23:13:32.809832    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300 "sudo cat /home/docker/cp-test.txt": (8.5467356s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m04 "sudo cat /home/docker/cp-test_ha-586300_ha-586300-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m04 "sudo cat /home/docker/cp-test_ha-586300_ha-586300-m04.txt": (8.4836937s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 cp testdata\cp-test.txt ha-586300-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 cp testdata\cp-test.txt ha-586300-m02:/home/docker/cp-test.txt: (8.5048324s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m02 "sudo cat /home/docker/cp-test.txt": (8.5518693s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3745865926\001\cp-test_ha-586300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3745865926\001\cp-test_ha-586300-m02.txt: (8.5158922s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m02 "sudo cat /home/docker/cp-test.txt"
E0513 23:14:13.802066    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m02 "sudo cat /home/docker/cp-test.txt": (8.4542674s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300-m02:/home/docker/cp-test.txt ha-586300:/home/docker/cp-test_ha-586300-m02_ha-586300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300-m02:/home/docker/cp-test.txt ha-586300:/home/docker/cp-test_ha-586300-m02_ha-586300.txt: (14.8502442s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m02 "sudo cat /home/docker/cp-test.txt": (8.4413988s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300 "sudo cat /home/docker/cp-test_ha-586300-m02_ha-586300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300 "sudo cat /home/docker/cp-test_ha-586300-m02_ha-586300.txt": (8.47721s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300-m02:/home/docker/cp-test.txt ha-586300-m03:/home/docker/cp-test_ha-586300-m02_ha-586300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300-m02:/home/docker/cp-test.txt ha-586300-m03:/home/docker/cp-test_ha-586300-m02_ha-586300-m03.txt: (15.0146602s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m02 "sudo cat /home/docker/cp-test.txt": (8.6905981s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m03 "sudo cat /home/docker/cp-test_ha-586300-m02_ha-586300-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m03 "sudo cat /home/docker/cp-test_ha-586300-m02_ha-586300-m03.txt": (8.811359s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300-m02:/home/docker/cp-test.txt ha-586300-m04:/home/docker/cp-test_ha-586300-m02_ha-586300-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300-m02:/home/docker/cp-test.txt ha-586300-m04:/home/docker/cp-test_ha-586300-m02_ha-586300-m04.txt: (15.0029338s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m02 "sudo cat /home/docker/cp-test.txt": (8.3022793s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m04 "sudo cat /home/docker/cp-test_ha-586300-m02_ha-586300-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m04 "sudo cat /home/docker/cp-test_ha-586300-m02_ha-586300-m04.txt": (8.4153894s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 cp testdata\cp-test.txt ha-586300-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 cp testdata\cp-test.txt ha-586300-m03:/home/docker/cp-test.txt: (8.3332016s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m03 "sudo cat /home/docker/cp-test.txt": (8.293794s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3745865926\001\cp-test_ha-586300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3745865926\001\cp-test_ha-586300-m03.txt: (8.3164742s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m03 "sudo cat /home/docker/cp-test.txt": (8.2514549s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300-m03:/home/docker/cp-test.txt ha-586300:/home/docker/cp-test_ha-586300-m03_ha-586300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300-m03:/home/docker/cp-test.txt ha-586300:/home/docker/cp-test_ha-586300-m03_ha-586300.txt: (14.4149937s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m03 "sudo cat /home/docker/cp-test.txt": (8.1891265s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300 "sudo cat /home/docker/cp-test_ha-586300-m03_ha-586300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300 "sudo cat /home/docker/cp-test_ha-586300-m03_ha-586300.txt": (8.3313723s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300-m03:/home/docker/cp-test.txt ha-586300-m02:/home/docker/cp-test_ha-586300-m03_ha-586300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300-m03:/home/docker/cp-test.txt ha-586300-m02:/home/docker/cp-test_ha-586300-m03_ha-586300-m02.txt: (14.5510924s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m03 "sudo cat /home/docker/cp-test.txt": (8.2733692s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m02 "sudo cat /home/docker/cp-test_ha-586300-m03_ha-586300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m02 "sudo cat /home/docker/cp-test_ha-586300-m03_ha-586300-m02.txt": (8.2925611s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300-m03:/home/docker/cp-test.txt ha-586300-m04:/home/docker/cp-test_ha-586300-m03_ha-586300-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300-m03:/home/docker/cp-test.txt ha-586300-m04:/home/docker/cp-test_ha-586300-m03_ha-586300-m04.txt: (14.3503994s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m03 "sudo cat /home/docker/cp-test.txt"
E0513 23:17:50.632061    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m03 "sudo cat /home/docker/cp-test.txt": (8.2890568s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m04 "sudo cat /home/docker/cp-test_ha-586300-m03_ha-586300-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m04 "sudo cat /home/docker/cp-test_ha-586300-m03_ha-586300-m04.txt": (8.2232306s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 cp testdata\cp-test.txt ha-586300-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 cp testdata\cp-test.txt ha-586300-m04:/home/docker/cp-test.txt: (8.3549663s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m04 "sudo cat /home/docker/cp-test.txt": (8.3168039s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3745865926\001\cp-test_ha-586300-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3745865926\001\cp-test_ha-586300-m04.txt: (8.2919287s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m04 "sudo cat /home/docker/cp-test.txt": (8.2844922s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300-m04:/home/docker/cp-test.txt ha-586300:/home/docker/cp-test_ha-586300-m04_ha-586300.txt
E0513 23:18:32.835961    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300-m04:/home/docker/cp-test.txt ha-586300:/home/docker/cp-test_ha-586300-m04_ha-586300.txt: (14.4674509s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m04 "sudo cat /home/docker/cp-test.txt": (8.3605226s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300 "sudo cat /home/docker/cp-test_ha-586300-m04_ha-586300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300 "sudo cat /home/docker/cp-test_ha-586300-m04_ha-586300.txt": (8.3399466s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300-m04:/home/docker/cp-test.txt ha-586300-m02:/home/docker/cp-test_ha-586300-m04_ha-586300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300-m04:/home/docker/cp-test.txt ha-586300-m02:/home/docker/cp-test_ha-586300-m04_ha-586300-m02.txt: (14.4252218s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m04 "sudo cat /home/docker/cp-test.txt": (8.2270602s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m02 "sudo cat /home/docker/cp-test_ha-586300-m04_ha-586300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m02 "sudo cat /home/docker/cp-test_ha-586300-m04_ha-586300-m02.txt": (8.3380754s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300-m04:/home/docker/cp-test.txt ha-586300-m03:/home/docker/cp-test_ha-586300-m04_ha-586300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 cp ha-586300-m04:/home/docker/cp-test.txt ha-586300-m03:/home/docker/cp-test_ha-586300-m04_ha-586300-m03.txt: (14.5289327s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m04 "sudo cat /home/docker/cp-test.txt": (8.3176168s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m03 "sudo cat /home/docker/cp-test_ha-586300-m04_ha-586300-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 ssh -n ha-586300-m03 "sudo cat /home/docker/cp-test_ha-586300-m04_ha-586300-m03.txt": (8.520769s)
--- PASS: TestMultiControlPlane/serial/CopyFile (555.62s)

TestMultiControlPlane/serial/StopSecondaryNode (67.73s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-586300 node stop m02 -v=7 --alsologtostderr: (33.3444157s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-586300 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-586300 status -v=7 --alsologtostderr: exit status 7 (34.3853007s)

-- stdout --
	ha-586300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-586300-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-586300-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-586300-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	W0513 23:20:39.233436    4436 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0513 23:20:39.289676    4436 out.go:291] Setting OutFile to fd 960 ...
	I0513 23:20:39.290444    4436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 23:20:39.290444    4436 out.go:304] Setting ErrFile to fd 708...
	I0513 23:20:39.290444    4436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 23:20:39.303575    4436 out.go:298] Setting JSON to false
	I0513 23:20:39.303640    4436 mustload.go:65] Loading cluster: ha-586300
	I0513 23:20:39.303640    4436 notify.go:220] Checking for updates...
	I0513 23:20:39.304410    4436 config.go:182] Loaded profile config "ha-586300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 23:20:39.304484    4436 status.go:255] checking status of ha-586300 ...
	I0513 23:20:39.305315    4436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 23:20:41.330015    4436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:20:41.330090    4436 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:20:41.330090    4436 status.go:330] ha-586300 host status = "Running" (err=<nil>)
	I0513 23:20:41.330090    4436 host.go:66] Checking if "ha-586300" exists ...
	I0513 23:20:41.330809    4436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 23:20:43.320426    4436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:20:43.320426    4436 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:20:43.320426    4436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 23:20:45.668715    4436 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 23:20:45.668789    4436 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:20:45.668789    4436 host.go:66] Checking if "ha-586300" exists ...
	I0513 23:20:45.677464    4436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0513 23:20:45.677464    4436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300 ).state
	I0513 23:20:47.651531    4436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:20:47.651673    4436 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:20:47.651673    4436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300 ).networkadapters[0]).ipaddresses[0]
	I0513 23:20:50.029933    4436 main.go:141] libmachine: [stdout =====>] : 172.23.102.229
	
	I0513 23:20:50.029973    4436 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:20:50.030113    4436 sshutil.go:53] new ssh client: &{IP:172.23.102.229 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300\id_rsa Username:docker}
	I0513 23:20:50.127430    4436 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.4497534s)
	I0513 23:20:50.136815    4436 ssh_runner.go:195] Run: systemctl --version
	I0513 23:20:50.159192    4436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 23:20:50.194290    4436 kubeconfig.go:125] found "ha-586300" server: "https://172.23.111.254:8443"
	I0513 23:20:50.194290    4436 api_server.go:166] Checking apiserver status ...
	I0513 23:20:50.203006    4436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 23:20:50.239932    4436 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2038/cgroup
	W0513 23:20:50.258893    4436 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2038/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0513 23:20:50.270246    4436 ssh_runner.go:195] Run: ls
	I0513 23:20:50.277562    4436 api_server.go:253] Checking apiserver healthz at https://172.23.111.254:8443/healthz ...
	I0513 23:20:50.383371    4436 api_server.go:279] https://172.23.111.254:8443/healthz returned 200:
	ok
	I0513 23:20:50.383371    4436 status.go:422] ha-586300 apiserver status = Running (err=<nil>)
	I0513 23:20:50.383451    4436 status.go:257] ha-586300 status: &{Name:ha-586300 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0513 23:20:50.383451    4436 status.go:255] checking status of ha-586300-m02 ...
	I0513 23:20:50.384093    4436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m02 ).state
	I0513 23:20:52.339856    4436 main.go:141] libmachine: [stdout =====>] : Off
	
	I0513 23:20:52.339856    4436 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:20:52.339925    4436 status.go:330] ha-586300-m02 host status = "Stopped" (err=<nil>)
	I0513 23:20:52.339925    4436 status.go:343] host is not running, skipping remaining checks
	I0513 23:20:52.339925    4436 status.go:257] ha-586300-m02 status: &{Name:ha-586300-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0513 23:20:52.339925    4436 status.go:255] checking status of ha-586300-m03 ...
	I0513 23:20:52.340633    4436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:20:54.294369    4436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:20:54.296776    4436 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:20:54.296776    4436 status.go:330] ha-586300-m03 host status = "Running" (err=<nil>)
	I0513 23:20:54.296776    4436 host.go:66] Checking if "ha-586300-m03" exists ...
	I0513 23:20:54.297479    4436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:20:56.232457    4436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:20:56.232641    4436 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:20:56.232641    4436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:20:58.544342    4436 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:20:58.544342    4436 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:20:58.544511    4436 host.go:66] Checking if "ha-586300-m03" exists ...
	I0513 23:20:58.553995    4436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0513 23:20:58.553995    4436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m03 ).state
	I0513 23:21:00.463076    4436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:21:00.463076    4436 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:21:00.463837    4436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m03 ).networkadapters[0]).ipaddresses[0]
	I0513 23:21:02.781649    4436 main.go:141] libmachine: [stdout =====>] : 172.23.109.129
	
	I0513 23:21:02.781649    4436 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:21:02.782803    4436 sshutil.go:53] new ssh client: &{IP:172.23.109.129 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m03\id_rsa Username:docker}
	I0513 23:21:02.884928    4436 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.3306534s)
	I0513 23:21:02.893706    4436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 23:21:02.923925    4436 kubeconfig.go:125] found "ha-586300" server: "https://172.23.111.254:8443"
	I0513 23:21:02.923925    4436 api_server.go:166] Checking apiserver status ...
	I0513 23:21:02.935923    4436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 23:21:02.974992    4436 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2188/cgroup
	W0513 23:21:02.992447    4436 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2188/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0513 23:21:03.000963    4436 ssh_runner.go:195] Run: ls
	I0513 23:21:03.007913    4436 api_server.go:253] Checking apiserver healthz at https://172.23.111.254:8443/healthz ...
	I0513 23:21:03.014405    4436 api_server.go:279] https://172.23.111.254:8443/healthz returned 200:
	ok
	I0513 23:21:03.014405    4436 status.go:422] ha-586300-m03 apiserver status = Running (err=<nil>)
	I0513 23:21:03.014405    4436 status.go:257] ha-586300-m03 status: &{Name:ha-586300-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0513 23:21:03.014405    4436 status.go:255] checking status of ha-586300-m04 ...
	I0513 23:21:03.014940    4436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m04 ).state
	I0513 23:21:04.942500    4436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:21:04.942500    4436 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:21:04.942500    4436 status.go:330] ha-586300-m04 host status = "Running" (err=<nil>)
	I0513 23:21:04.942500    4436 host.go:66] Checking if "ha-586300-m04" exists ...
	I0513 23:21:04.943229    4436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m04 ).state
	I0513 23:21:06.870632    4436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:21:06.871408    4436 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:21:06.871408    4436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m04 ).networkadapters[0]).ipaddresses[0]
	I0513 23:21:09.159819    4436 main.go:141] libmachine: [stdout =====>] : 172.23.110.77
	
	I0513 23:21:09.159819    4436 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:21:09.159819    4436 host.go:66] Checking if "ha-586300-m04" exists ...
	I0513 23:21:09.168753    4436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0513 23:21:09.168753    4436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-586300-m04 ).state
	I0513 23:21:11.073375    4436 main.go:141] libmachine: [stdout =====>] : Running
	
	I0513 23:21:11.074342    4436 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:21:11.074401    4436 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-586300-m04 ).networkadapters[0]).ipaddresses[0]
	I0513 23:21:13.364221    4436 main.go:141] libmachine: [stdout =====>] : 172.23.110.77
	
	I0513 23:21:13.364221    4436 main.go:141] libmachine: [stderr =====>] : 
	I0513 23:21:13.364744    4436 sshutil.go:53] new ssh client: &{IP:172.23.110.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-586300-m04\id_rsa Username:docker}
	I0513 23:21:13.470889    4436 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.3018451s)
	I0513 23:21:13.481575    4436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 23:21:13.506796    4436 status.go:257] ha-586300-m04 status: &{Name:ha-586300-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (67.73s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (18.96s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (18.9612973s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (18.96s)

TestImageBuild/serial/Setup (176.78s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-023000 --driver=hyperv
E0513 23:28:32.858393    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
E0513 23:30:53.861297    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-023000 --driver=hyperv: (2m56.7741166s)
--- PASS: TestImageBuild/serial/Setup (176.78s)

TestImageBuild/serial/NormalBuild (8.78s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-023000
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-023000: (8.7772889s)
--- PASS: TestImageBuild/serial/NormalBuild (8.78s)

TestImageBuild/serial/BuildWithBuildArg (7.75s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-023000
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-023000: (7.752572s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (7.75s)

TestImageBuild/serial/BuildWithDockerIgnore (6.64s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-023000
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-023000: (6.6280852s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (6.64s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (6.45s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-023000
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-023000: (6.4414819s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (6.45s)

TestJSONOutput/start/Command (222s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-121100 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0513 23:32:50.676548    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0513 23:33:32.878676    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-121100 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m41.9985561s)
--- PASS: TestJSONOutput/start/Command (222.00s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (7.05s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-121100 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-121100 --output=json --user=testUser: (7.0462806s)
--- PASS: TestJSONOutput/pause/Command (7.05s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (6.88s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-121100 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-121100 --output=json --user=testUser: (6.8764792s)
--- PASS: TestJSONOutput/unpause/Command (6.88s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (37.76s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-121100 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-121100 --output=json --user=testUser: (37.7585159s)
--- PASS: TestJSONOutput/stop/Command (37.76s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-427300 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-427300 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (218.4958ms)

-- stdout --
	{"specversion":"1.0","id":"03c6a7b9-79fd-4170-bd4f-9416bcc31796","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-427300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7f9f3761-8cbf-4023-bb2b-8a93e60d801d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube5\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"aa0c76d4-2c4c-4153-a769-c3bddb1a64bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ab780397-8719-4782-b8dc-d3f3b5324707","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"583f4af2-eb9c-4820-aef8-ffacc7582a1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18872"}}
	{"specversion":"1.0","id":"f2fa3ad2-5c94-482c-a440-8daa0a6284df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0d4e83b9-224b-4b1b-a66f-5a932b72930c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
** stderr ** 
	W0513 23:37:14.065732    5724 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-427300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-427300
--- PASS: TestErrorJSONOutput (1.19s)

TestMainNoArgs (0.19s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.19s)

TestMinikubeProfile (484.01s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-515500 --driver=hyperv
E0513 23:37:50.694129    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0513 23:38:32.892752    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
E0513 23:39:56.105657    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-515500 --driver=hyperv: (2m57.6048539s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-515500 --driver=hyperv
E0513 23:42:50.713519    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-515500 --driver=hyperv: (2m58.5427997s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-515500
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.0335195s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-515500
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
E0513 23:43:32.907914    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (18.7853942s)
helpers_test.go:175: Cleaning up "second-515500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-515500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-515500: (44.8866334s)
helpers_test.go:175: Cleaning up "first-515500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-515500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-515500: (44.4364796s)
--- PASS: TestMinikubeProfile (484.01s)

TestMountStart/serial/StartWithMountFirst (136.27s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-433500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0513 23:47:33.919664    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-433500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m15.2626422s)
--- PASS: TestMountStart/serial/StartWithMountFirst (136.27s)

TestMountStart/serial/VerifyMountFirst (8.3s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-433500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-433500 ssh -- ls /minikube-host: (8.2977032s)
--- PASS: TestMountStart/serial/VerifyMountFirst (8.30s)

TestMountStart/serial/StartWithMountSecond (135.27s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-505400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0513 23:47:50.733079    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0513 23:48:32.932602    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-505400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m14.2631785s)
--- PASS: TestMountStart/serial/StartWithMountSecond (135.27s)

TestMountStart/serial/VerifyMountSecond (8.54s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-505400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-505400 ssh -- ls /minikube-host: (8.540618s)
--- PASS: TestMountStart/serial/VerifyMountSecond (8.54s)

TestMountStart/serial/DeleteFirst (25.13s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-433500 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-433500 --alsologtostderr -v=5: (25.1186738s)
--- PASS: TestMountStart/serial/DeleteFirst (25.13s)

TestMountStart/serial/VerifyMountPostDelete (8.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-505400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-505400 ssh -- ls /minikube-host: (8.255922s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (8.26s)

TestMountStart/serial/Stop (26.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-505400
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-505400: (26.1798036s)
--- PASS: TestMountStart/serial/Stop (26.20s)

TestMountStart/serial/RestartStopped (102.93s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-505400
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-505400: (1m41.9165625s)
--- PASS: TestMountStart/serial/RestartStopped (102.93s)

TestMountStart/serial/VerifyMountPostStop (8.32s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-505400 ssh -- ls /minikube-host
E0513 23:52:50.738230    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-505400 ssh -- ls /minikube-host: (8.3124381s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (8.32s)

TestMultiNode/serial/FreshStart2Nodes (381.24s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-101100 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0513 23:53:32.945120    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
E0513 23:56:36.169394    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
E0513 23:57:50.767137    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0513 23:58:32.962074    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-101100 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m0.2813838s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 status --alsologtostderr: (20.9566063s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (381.24s)

TestMultiNode/serial/DeployApp2Nodes (8.07s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-101100 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-101100 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-101100 -- rollout status deployment/busybox: (2.87813s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-101100 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-101100 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-101100 -- exec busybox-fc5497c4f-q7442 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-101100 -- exec busybox-fc5497c4f-q7442 -- nslookup kubernetes.io: (1.9586524s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-101100 -- exec busybox-fc5497c4f-xqj6w -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-101100 -- exec busybox-fc5497c4f-q7442 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-101100 -- exec busybox-fc5497c4f-xqj6w -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-101100 -- exec busybox-fc5497c4f-q7442 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-101100 -- exec busybox-fc5497c4f-xqj6w -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.07s)

TestMultiNode/serial/AddNode (205.87s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-101100 -v 3 --alsologtostderr
E0514 00:02:50.779608    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0514 00:03:32.974434    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-101100 -v 3 --alsologtostderr: (2m54.2301556s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 status --alsologtostderr: (31.6400364s)
--- PASS: TestMultiNode/serial/AddNode (205.87s)

TestMultiNode/serial/MultiNodeLabels (0.16s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-101100 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.16s)

TestMultiNode/serial/ProfileList (10.53s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E0514 00:04:13.983183    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (10.5331641s)
--- PASS: TestMultiNode/serial/ProfileList (10.53s)

TestMultiNode/serial/CopyFile (311.9s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 status --output json --alsologtostderr: (31.8377387s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 cp testdata\cp-test.txt multinode-101100:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 cp testdata\cp-test.txt multinode-101100:/home/docker/cp-test.txt: (8.3882053s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100 "sudo cat /home/docker/cp-test.txt": (8.4695327s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 cp multinode-101100:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile439564435\001\cp-test_multinode-101100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 cp multinode-101100:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile439564435\001\cp-test_multinode-101100.txt: (8.3855382s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100 "sudo cat /home/docker/cp-test.txt": (8.3336693s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 cp multinode-101100:/home/docker/cp-test.txt multinode-101100-m02:/home/docker/cp-test_multinode-101100_multinode-101100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 cp multinode-101100:/home/docker/cp-test.txt multinode-101100-m02:/home/docker/cp-test_multinode-101100_multinode-101100-m02.txt: (14.0467674s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100 "sudo cat /home/docker/cp-test.txt": (8.0775848s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100-m02 "sudo cat /home/docker/cp-test_multinode-101100_multinode-101100-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100-m02 "sudo cat /home/docker/cp-test_multinode-101100_multinode-101100-m02.txt": (8.0942184s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 cp multinode-101100:/home/docker/cp-test.txt multinode-101100-m03:/home/docker/cp-test_multinode-101100_multinode-101100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 cp multinode-101100:/home/docker/cp-test.txt multinode-101100-m03:/home/docker/cp-test_multinode-101100_multinode-101100-m03.txt: (14.163227s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100 "sudo cat /home/docker/cp-test.txt": (8.0968909s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100-m03 "sudo cat /home/docker/cp-test_multinode-101100_multinode-101100-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100-m03 "sudo cat /home/docker/cp-test_multinode-101100_multinode-101100-m03.txt": (8.1423586s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 cp testdata\cp-test.txt multinode-101100-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 cp testdata\cp-test.txt multinode-101100-m02:/home/docker/cp-test.txt: (8.1778381s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100-m02 "sudo cat /home/docker/cp-test.txt": (8.0668483s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 cp multinode-101100-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile439564435\001\cp-test_multinode-101100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 cp multinode-101100-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile439564435\001\cp-test_multinode-101100-m02.txt: (8.0480293s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100-m02 "sudo cat /home/docker/cp-test.txt": (8.0865817s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 cp multinode-101100-m02:/home/docker/cp-test.txt multinode-101100:/home/docker/cp-test_multinode-101100-m02_multinode-101100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 cp multinode-101100-m02:/home/docker/cp-test.txt multinode-101100:/home/docker/cp-test_multinode-101100-m02_multinode-101100.txt: (14.0786257s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100-m02 "sudo cat /home/docker/cp-test.txt": (8.0699962s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100 "sudo cat /home/docker/cp-test_multinode-101100-m02_multinode-101100.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100 "sudo cat /home/docker/cp-test_multinode-101100-m02_multinode-101100.txt": (8.0776334s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 cp multinode-101100-m02:/home/docker/cp-test.txt multinode-101100-m03:/home/docker/cp-test_multinode-101100-m02_multinode-101100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 cp multinode-101100-m02:/home/docker/cp-test.txt multinode-101100-m03:/home/docker/cp-test_multinode-101100-m02_multinode-101100-m03.txt: (14.1095435s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100-m02 "sudo cat /home/docker/cp-test.txt"
E0514 00:07:50.797920    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100-m02 "sudo cat /home/docker/cp-test.txt": (8.0625591s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100-m03 "sudo cat /home/docker/cp-test_multinode-101100-m02_multinode-101100-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100-m03 "sudo cat /home/docker/cp-test_multinode-101100-m02_multinode-101100-m03.txt": (8.0753047s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 cp testdata\cp-test.txt multinode-101100-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 cp testdata\cp-test.txt multinode-101100-m03:/home/docker/cp-test.txt: (8.1970306s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100-m03 "sudo cat /home/docker/cp-test.txt": (8.1458419s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 cp multinode-101100-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile439564435\001\cp-test_multinode-101100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 cp multinode-101100-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile439564435\001\cp-test_multinode-101100-m03.txt: (8.1412627s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100-m03 "sudo cat /home/docker/cp-test.txt"
E0514 00:08:32.997921    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100-m03 "sudo cat /home/docker/cp-test.txt": (8.0117755s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 cp multinode-101100-m03:/home/docker/cp-test.txt multinode-101100:/home/docker/cp-test_multinode-101100-m03_multinode-101100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 cp multinode-101100-m03:/home/docker/cp-test.txt multinode-101100:/home/docker/cp-test_multinode-101100-m03_multinode-101100.txt: (14.0886146s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100-m03 "sudo cat /home/docker/cp-test.txt": (8.0951023s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100 "sudo cat /home/docker/cp-test_multinode-101100-m03_multinode-101100.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100 "sudo cat /home/docker/cp-test_multinode-101100-m03_multinode-101100.txt": (8.0573414s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 cp multinode-101100-m03:/home/docker/cp-test.txt multinode-101100-m02:/home/docker/cp-test_multinode-101100-m03_multinode-101100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 cp multinode-101100-m03:/home/docker/cp-test.txt multinode-101100-m02:/home/docker/cp-test_multinode-101100-m03_multinode-101100-m02.txt: (14.0761726s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100-m03 "sudo cat /home/docker/cp-test.txt": (8.0340515s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100-m02 "sudo cat /home/docker/cp-test_multinode-101100-m03_multinode-101100-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 ssh -n multinode-101100-m02 "sudo cat /home/docker/cp-test_multinode-101100-m03_multinode-101100-m02.txt": (8.0302742s)
--- PASS: TestMultiNode/serial/CopyFile (311.90s)

TestMultiNode/serial/StopNode (67.01s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 node stop m03: (22.3432718s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-101100 status: exit status 7 (22.1784589s)

-- stdout --
	multinode-101100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-101100-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-101100-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0514 00:09:56.749310   13808 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-101100 status --alsologtostderr: exit status 7 (22.4827797s)

-- stdout --
	multinode-101100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-101100-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-101100-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0514 00:10:18.931804    7568 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0514 00:10:18.984313    7568 out.go:291] Setting OutFile to fd 748 ...
	I0514 00:10:18.989720    7568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0514 00:10:18.989720    7568 out.go:304] Setting ErrFile to fd 820...
	I0514 00:10:18.989720    7568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0514 00:10:19.002593    7568 out.go:298] Setting JSON to false
	I0514 00:10:19.002658    7568 mustload.go:65] Loading cluster: multinode-101100
	I0514 00:10:19.002793    7568 notify.go:220] Checking for updates...
	I0514 00:10:19.003395    7568 config.go:182] Loaded profile config "multinode-101100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0514 00:10:19.003395    7568 status.go:255] checking status of multinode-101100 ...
	I0514 00:10:19.004188    7568 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:10:20.871751    7568 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:10:20.881467    7568 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:10:20.881467    7568 status.go:330] multinode-101100 host status = "Running" (err=<nil>)
	I0514 00:10:20.881543    7568 host.go:66] Checking if "multinode-101100" exists ...
	I0514 00:10:20.882066    7568 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:10:22.759963    7568 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:10:22.759963    7568 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:10:22.759963    7568 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:10:24.920166    7568 main.go:141] libmachine: [stdout =====>] : 172.23.106.39
	
	I0514 00:10:24.929910    7568 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:10:24.929910    7568 host.go:66] Checking if "multinode-101100" exists ...
	I0514 00:10:24.938884    7568 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0514 00:10:24.939055    7568 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100 ).state
	I0514 00:10:26.759068    7568 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:10:26.759068    7568 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:10:26.759146    7568 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100 ).networkadapters[0]).ipaddresses[0]
	I0514 00:10:28.941693    7568 main.go:141] libmachine: [stdout =====>] : 172.23.106.39
	
	I0514 00:10:28.941693    7568 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:10:28.950931    7568 sshutil.go:53] new ssh client: &{IP:172.23.106.39 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100\id_rsa Username:docker}
	I0514 00:10:29.052406    7568 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.1132114s)
	I0514 00:10:29.064463    7568 ssh_runner.go:195] Run: systemctl --version
	I0514 00:10:29.084130    7568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0514 00:10:29.106639    7568 kubeconfig.go:125] found "multinode-101100" server: "https://172.23.106.39:8443"
	I0514 00:10:29.106715    7568 api_server.go:166] Checking apiserver status ...
	I0514 00:10:29.114988    7568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0514 00:10:29.149023    7568 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1934/cgroup
	W0514 00:10:29.173861    7568 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1934/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0514 00:10:29.188667    7568 ssh_runner.go:195] Run: ls
	I0514 00:10:29.195135    7568 api_server.go:253] Checking apiserver healthz at https://172.23.106.39:8443/healthz ...
	I0514 00:10:29.206204    7568 api_server.go:279] https://172.23.106.39:8443/healthz returned 200:
	ok
	I0514 00:10:29.206204    7568 status.go:422] multinode-101100 apiserver status = Running (err=<nil>)
	I0514 00:10:29.206204    7568 status.go:257] multinode-101100 status: &{Name:multinode-101100 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0514 00:10:29.206204    7568 status.go:255] checking status of multinode-101100-m02 ...
	I0514 00:10:29.206816    7568 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:10:31.059081    7568 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:10:31.059146    7568 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:10:31.059146    7568 status.go:330] multinode-101100-m02 host status = "Running" (err=<nil>)
	I0514 00:10:31.059146    7568 host.go:66] Checking if "multinode-101100-m02" exists ...
	I0514 00:10:31.060082    7568 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:10:32.940917    7568 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:10:32.941373    7568 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:10:32.941447    7568 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:10:35.166309    7568 main.go:141] libmachine: [stdout =====>] : 172.23.109.58
	
	I0514 00:10:35.166584    7568 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:10:35.166655    7568 host.go:66] Checking if "multinode-101100-m02" exists ...
	I0514 00:10:35.182506    7568 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0514 00:10:35.182506    7568 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m02 ).state
	I0514 00:10:37.057574    7568 main.go:141] libmachine: [stdout =====>] : Running
	
	I0514 00:10:37.057574    7568 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:10:37.057652    7568 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-101100-m02 ).networkadapters[0]).ipaddresses[0]
	I0514 00:10:39.317462    7568 main.go:141] libmachine: [stdout =====>] : 172.23.109.58
	
	I0514 00:10:39.317536    7568 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:10:39.317748    7568 sshutil.go:53] new ssh client: &{IP:172.23.109.58 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-101100-m02\id_rsa Username:docker}
	I0514 00:10:39.415508    7568 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.2326236s)
	I0514 00:10:39.424518    7568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0514 00:10:39.446730    7568 status.go:257] multinode-101100-m02 status: &{Name:multinode-101100-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0514 00:10:39.446878    7568 status.go:255] checking status of multinode-101100-m03 ...
	I0514 00:10:39.447925    7568 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-101100-m03 ).state
	I0514 00:10:41.310549    7568 main.go:141] libmachine: [stdout =====>] : Off
	
	I0514 00:10:41.310549    7568 main.go:141] libmachine: [stderr =====>] : 
	I0514 00:10:41.311589    7568 status.go:330] multinode-101100-m03 host status = "Stopped" (err=<nil>)
	I0514 00:10:41.311589    7568 status.go:343] host is not running, skipping remaining checks
	I0514 00:10:41.311589    7568 status.go:257] multinode-101100-m03 status: &{Name:multinode-101100-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (67.01s)

TestMultiNode/serial/StartAfterStop (161.99s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 node start m03 -v=7 --alsologtostderr
E0514 00:12:50.818057    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 node start m03 -v=7 --alsologtostderr: (2m10.3843534s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-101100 status -v=7 --alsologtostderr
E0514 00:13:16.236353    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-101100 status -v=7 --alsologtostderr: (31.4602522s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (161.99s)

TestPreload (449.81s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-204600 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0514 00:27:50.875411    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0514 00:28:33.071640    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
E0514 00:29:56.301244    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-204600 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (3m37.3136174s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-204600 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-204600 image pull gcr.io/k8s-minikube/busybox: (7.7701084s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-204600
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-204600: (37.3933096s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-204600 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0514 00:32:50.884235    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-204600 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m22.0701586s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-204600 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-204600 image list: (6.2827819s)
helpers_test.go:175: Cleaning up "test-preload-204600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-204600
E0514 00:33:33.093783    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-204600: (38.9671973s)
--- PASS: TestPreload (449.81s)

TestScheduledStopWindows (302.26s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-589400 --memory=2048 --driver=hyperv
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-589400 --memory=2048 --driver=hyperv: (2m55.2411658s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-589400 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-589400 --schedule 5m: (9.5981735s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-589400 -n scheduled-stop-589400
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-589400 -n scheduled-stop-589400: exit status 1 (10.0174844s)

** stderr ** 
	W0514 00:37:07.171691    3904 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-589400 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-589400 -- sudo systemctl show minikube-scheduled-stop --no-page: (8.3824363s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-589400 --schedule 5s
E0514 00:37:34.118330    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-589400 --schedule 5s: (9.3255185s)
E0514 00:37:50.914339    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
E0514 00:38:33.111023    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-589400
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-589400: exit status 7 (2.0778242s)

-- stdout --
	scheduled-stop-589400
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	W0514 00:38:34.915392    2296 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-589400 -n scheduled-stop-589400
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-589400 -n scheduled-stop-589400: exit status 7 (2.0986217s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0514 00:38:36.992868   11044 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-589400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-589400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-589400: (25.513711s)
--- PASS: TestScheduledStopWindows (302.26s)

TestRunningBinaryUpgrade (814.76s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.3011831911.exe start -p running-upgrade-240100 --memory=2200 --vm-driver=hyperv
E0514 00:43:33.133132    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.3011831911.exe start -p running-upgrade-240100 --memory=2200 --vm-driver=hyperv: (5m21.9058489s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-240100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0514 00:48:33.147652    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-240100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (7m8.9123011s)
helpers_test.go:175: Cleaning up "running-upgrade-240100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-240100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-240100: (1m2.9698965s)
--- PASS: TestRunningBinaryUpgrade (814.76s)

TestKubernetesUpgrade (1140.4s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-650500 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-650500 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (7m20.9642259s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-650500
E0514 00:46:36.378779    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-596400\client.crt: The system cannot find the path specified.
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-650500: (38.1471358s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-650500 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-650500 status --format={{.Host}}: exit status 7 (2.2473899s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0514 00:47:03.773249    5696 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-650500 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperv
E0514 00:47:50.946332    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-650500 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperv: (6m44.0914435s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-650500 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-650500 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-650500 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv: exit status 106 (253.695ms)

-- stdout --
	* [kubernetes-upgrade-650500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0514 00:53:50.271199    7632 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-650500
	    minikube start -p kubernetes-upgrade-650500 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6505002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-650500 --kubernetes-version=v1.30.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-650500 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperv
E0514 00:54:14.194275    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-650500 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperv: (3m25.8540841s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-650500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-650500
E0514 00:57:50.983360    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-650500: (48.6859769s)
--- PASS: TestKubernetesUpgrade (1140.40s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.31s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-650500 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-650500 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (310.2801ms)

-- stdout --
	* [NoKubernetes-650500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0514 00:39:04.622886    1600 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.31s)

TestStoppedBinaryUpgrade/Setup (0.95s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.95s)

TestStoppedBinaryUpgrade/Upgrade (767.16s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.2306301341.exe start -p stopped-upgrade-597400 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.2306301341.exe start -p stopped-upgrade-597400 --memory=2200 --vm-driver=hyperv: (6m24.3714461s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.2306301341.exe -p stopped-upgrade-597400 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.2306301341.exe -p stopped-upgrade-597400 stop: (34.318089s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-597400 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-597400 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (5m48.4673238s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (767.16s)

TestStoppedBinaryUpgrade/MinikubeLogs (8.71s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-597400
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-597400: (8.7063617s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (8.71s)

TestPause/serial/Start (447.97s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-851700 --memory=2048 --install-addons=false --wait=all --driver=hyperv
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-851700 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (7m27.967549s)
--- PASS: TestPause/serial/Start (447.97s)

TestPause/serial/SecondStartNoReconfiguration (400.9s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-851700 --alsologtostderr -v=1 --driver=hyperv
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-851700 --alsologtostderr -v=1 --driver=hyperv: (6m40.8633833s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (400.90s)

TestPause/serial/Pause (9.09s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-851700 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-851700 --alsologtostderr -v=5: (9.0888196s)
--- PASS: TestPause/serial/Pause (9.09s)

TestPause/serial/VerifyStatus (13s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-851700 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-851700 --output=json --layout=cluster: exit status 2 (13.0020739s)

                                                
                                                
-- stdout --
	{"Name":"pause-851700","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-851700","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	W0514 01:12:23.115650   13820 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestPause/serial/VerifyStatus (13.00s)

TestPause/serial/Unpause (8.36s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-851700 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-851700 --alsologtostderr -v=5: (8.3580332s)
--- PASS: TestPause/serial/Unpause (8.36s)

TestPause/serial/PauseAgain (8.64s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-851700 --alsologtostderr -v=5
E0514 01:12:51.041385    5984 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-129600\client.crt: The system cannot find the path specified.
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-851700 --alsologtostderr -v=5: (8.636716s)
--- PASS: TestPause/serial/PauseAgain (8.64s)

Test skip (30/210)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-129600 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-129600 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 2468: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)

TestFunctional/parallel/DryRun (5.03s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-129600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-129600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0271572s)

                                                
                                                
-- stdout --
	* [functional-129600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 22:48:38.889320    6464 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0513 22:48:38.960611    6464 out.go:291] Setting OutFile to fd 876 ...
	I0513 22:48:38.961141    6464 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:48:38.961141    6464 out.go:304] Setting ErrFile to fd 772...
	I0513 22:48:38.961141    6464 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:48:38.984094    6464 out.go:298] Setting JSON to false
	I0513 22:48:38.987841    6464 start.go:129] hostinfo: {"hostname":"minikube5","uptime":2082,"bootTime":1715638436,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4355 Build 19045.4355","kernelVersion":"10.0.19045.4355 Build 19045.4355","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0513 22:48:38.988533    6464 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 22:48:38.993406    6464 out.go:177] * [functional-129600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	I0513 22:48:38.995979    6464 notify.go:220] Checking for updates...
	I0513 22:48:38.996229    6464 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 22:48:39.001679    6464 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 22:48:39.004418    6464 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0513 22:48:39.007392    6464 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 22:48:39.009955    6464 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 22:48:39.013721    6464 config.go:182] Loaded profile config "functional-129600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 22:48:39.014719    6464 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.03s)

TestFunctional/parallel/InternationalLanguage (5.03s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-129600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-129600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.025067s)

                                                
                                                
-- stdout --
	* [functional-129600] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18872
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0513 22:48:33.865354    9236 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0513 22:48:33.950689    9236 out.go:291] Setting OutFile to fd 608 ...
	I0513 22:48:33.951619    9236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:48:33.951619    9236 out.go:304] Setting ErrFile to fd 920...
	I0513 22:48:33.951672    9236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 22:48:33.972644    9236 out.go:298] Setting JSON to false
	I0513 22:48:33.977001    9236 start.go:129] hostinfo: {"hostname":"minikube5","uptime":2077,"bootTime":1715638436,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4355 Build 19045.4355","kernelVersion":"10.0.19045.4355 Build 19045.4355","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0513 22:48:33.977001    9236 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 22:48:33.982949    9236 out.go:177] * [functional-129600] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4355 Build 19045.4355
	I0513 22:48:33.988730    9236 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0513 22:48:33.988125    9236 notify.go:220] Checking for updates...
	I0513 22:48:33.994616    9236 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 22:48:33.996862    9236 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0513 22:48:33.999637    9236 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 22:48:34.001910    9236 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 22:48:34.005105    9236 config.go:182] Loaded profile config "functional-129600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 22:48:34.005749    9236 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.03s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)